School of Advanced Technology

ADDRESS
School of Advanced Technology
Xi'an Jiaotong-Liverpool University
111 Ren'ai Road, Suzhou Dushu Lake Science and Education Innovation District, Suzhou Industrial Park
Suzhou, Jiangsu Province, P. R. China, 215123
1. Enhanced LSTM with Batch Normalization

Author:Wang, LN;Zhong, GQ;Yan, SJ;Dong, JY;Huang, KZ

Source:NEURAL INFORMATION PROCESSING (ICONIP 2019), PT I,2019,Vol.11953

Abstract:Recurrent neural networks (RNNs) are powerful models for sequence learning. However, the training of RNNs is complicated by the internal covariate shift problem, where the input distribution changes at each iteration during training as the parameters are updated. Although some work has applied batch normalization (BN) to alleviate this problem in long short-term memory (LSTM), BN has unfortunately not been applied to the update of the LSTM cell. In this paper, to tackle the internal covariate shift problem of LSTM, we introduce a method that successfully integrates BN into the update of the LSTM cell. Experimental results on two benchmark data sets, i.e. MNIST and Fashion-MNIST, show that the proposed method, enhanced LSTM with BN (eLSTM-BN), achieves faster convergence than LSTM and its variants, while obtaining higher classification accuracy on sequence learning tasks.
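The core idea, normalizing the recurrent cell update over the batch, can be sketched as below. This is a minimal NumPy illustration only; the exact placement of BN inside eLSTM-BN and its learnable scale/shift parameters follow the paper, which this sketch does not reproduce.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch dimension, then scale and shift.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def lstm_step_bn(x, h, c, W, U, b):
    # One LSTM step with batch normalization applied to the cell-state
    # update -- an illustration in the spirit of eLSTM-BN, not the paper's
    # exact formulation.
    z = x @ W + h @ U + b                  # (batch, 4*hidden) pre-activations
    H = h.shape[1]
    i = 1 / (1 + np.exp(-z[:, :H]))        # input gate
    f = 1 / (1 + np.exp(-z[:, H:2*H]))     # forget gate
    o = 1 / (1 + np.exp(-z[:, 2*H:3*H]))   # output gate
    g = np.tanh(z[:, 3*H:])                # candidate cell update
    c_new = batch_norm(f * c + i * g)      # BN on the updated cell state
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

After the BN step, each cell-state feature has zero mean over the batch, which is what counteracts the covariate shift described above.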
2. Improve Deep Learning with Unsupervised Objective

Author:Zhang, SF;Huang, KZ;Zhang, R;Hussain, A

Source:NEURAL INFORMATION PROCESSING, ICONIP 2017, PT I,2017,Vol.10634

Abstract:We propose a novel approach capable of embedding the unsupervised objective into hidden layers of the deep neural network (DNN) for preserving important unsupervised information. To this end, we exploit a very simple yet effective unsupervised method, i.e. principal component analysis (PCA), to generate the unsupervised "label" for the latent layers of the DNN. Each latent layer can then be supervised not just by the class label, but also by the unsupervised "label", so that the intrinsic structure information of the data can be learned and embedded. Compared with traditional methods that combine supervised and unsupervised learning, our proposed model avoids the need for layer-wise pre-training and complicated model learning, e.g. in deep autoencoders. We show that the resulting model achieves state-of-the-art performance on both face and handwriting data simply by learning the unsupervised "labels".
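A minimal sketch of generating PCA "labels" and folding them into a combined objective. The mean-squared penalty and the `alpha` trade-off weight are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def pca_labels(X, k):
    # Project centered data onto its top-k principal components; the
    # projections serve as unsupervised "labels" for a hidden layer.
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def combined_loss(hidden, class_loss, pca_target, alpha=0.1):
    # Supervised loss plus a penalty tying a hidden layer's activations to
    # the PCA "label" (alpha is an assumed trade-off weight).
    unsup = np.mean((hidden - pca_target) ** 2)
    return class_loss + alpha * unsup
```

In training, `combined_loss` would replace the plain classification loss, so each supervised gradient step also pulls the hidden layer toward the data's principal subspace.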
3. A Novel Search Interval Forecasting Optimization Algorithm

Author:Lou, Y;Li, JL;Shi, YH;Jin, LP

Source:ADVANCES IN SWARM INTELLIGENCE, PT I,2011,Vol.6728

Abstract:In this paper, we propose a novel search interval forecasting (SIF) optimization algorithm for global numerical optimization. In the SIF algorithm, the information accumulated in previous iterations of the evolution is used to forecast the area where a better optimization value is most likely to be located for the next search operation. Five types of search strategies are designed to accommodate different situations, which are determined by the history information. A suite of benchmark functions is used to test the SIF algorithm. The simulation results illustrate the good performance of SIF, especially for solving large-scale optimization problems.
4. Brain Storm Optimization Algorithm

Author:Shi, YH

Source:ADVANCES IN SWARM INTELLIGENCE, PT I,2011,Vol.6728

Abstract:Human beings are the most intelligent animals in the world. Intuitively, an optimization algorithm inspired by the human creative problem-solving process should be superior to optimization algorithms inspired by the collective behavior of insects such as ants and bees. In this paper, we introduce a novel brain storm optimization algorithm, inspired by the human brainstorming process. Two benchmark functions were tested to validate the effectiveness and usefulness of the proposed algorithm.
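A highly simplified sketch of the brainstorming loop: group the "ideas", perturb a group representative, and keep improvements. The original algorithm uses k-means clustering and a logsig-controlled step size; both are simplified here (fitness-rank grouping, exponential step decay) as illustrative assumptions:

```python
import numpy as np

def bso_minimize(f, dim, n=20, m=3, iters=300, seed=0):
    # Minimal brain-storm-style optimizer sketch (minimization).
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, dim))          # initial ideas
    fit = np.array([f(x) for x in pop])
    for t in range(iters):
        # Crude clustering: rank individuals by fitness into m groups
        # (stand-in for the k-means step of the original algorithm).
        order = np.argsort(fit)
        clusters = np.array_split(order, m)
        # Pick a random cluster; its best member acts as the cluster center.
        cl = clusters[rng.integers(m)]
        base = pop[cl[0]]
        # Decreasing Gaussian step size over iterations (assumed schedule).
        step = np.exp(-3.0 * t / iters)
        cand = base + step * rng.normal(size=dim)
        fc = f(cand)
        worst = np.argmax(fit)
        if fc < fit[worst]:                     # keep the new idea if it helps
            pop[worst], fit[worst] = cand, fc
    return pop[np.argmin(fit)], fit.min()
```

On a smooth test function such as the sphere, this loop steadily refines the best idea as the perturbation step shrinks.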
5. Particle Filter Optimization: A Brief Introduction

Author:Liu, B;Cheng, S;Shi, YH

Source:ADVANCES IN SWARM INTELLIGENCE, ICSI 2016, PT I,2016,Vol.9712

Abstract:In this paper, we provide a brief introduction to particle filter optimization (PFO). The particle filter (PF) theory has revolutionized probabilistic state filtering for dynamic systems, while the PFO algorithms, which are developed within the PF framework, have not attracted enough attention from the optimization community. The purpose of this paper is threefold. First, it aims to provide a succinct introduction to the PF theory, which forms the theoretical foundation for all PFO algorithms. Second, it reviews PFO algorithms under the umbrella of the PF theory. Lastly, it discusses promising research directions at the interface of PF methods and swarm intelligence techniques.
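The PFO recipe, weight candidate solutions like particles in a filter, resample, then diffuse, can be sketched as follows. The Boltzmann weighting with parameter `beta` and the noise schedule are illustrative assumptions, not a specific algorithm from the paper:

```python
import numpy as np

def pfo_minimize(f, dim, n=50, iters=100, beta=1.0, seed=0):
    # Particle-filter-style optimization sketch (minimization).
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in pop])
        # Importance weights: lower fitness -> higher weight
        # (Boltzmann form with assumed inverse temperature beta).
        w = np.exp(-beta * (fit - fit.min()))
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)        # resampling step
        # Diffusion step with a shrinking noise scale (assumed schedule).
        pop = pop[idx] + 0.5 * np.exp(-2.0 * t / iters) * rng.normal(size=(n, dim))
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()
```

The resampling step plays the role of the PF measurement update, and the diffusion step plays the role of the state transition, which is the correspondence the paper builds on.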
6. Field Support Vector Regression

Author:Jiang, HC;Huang, KZ;Zhang, R

Source:NEURAL INFORMATION PROCESSING, ICONIP 2017, PT I,2017,Vol.10634

Abstract:In regression tasks on static data, existing methods often assume that the samples were generated independently from an identical distribution (i.i.d.). However, this assumption is violated when input samples form groups, each affected by a different domain. In this case, style consistency exists within each group, and conventional machine learning models suffer degraded performance because the i.i.d. assumption no longer holds. In this paper, we propose a novel regression model named Field Support Vector Regression (F-SVR) that does not rely on the i.i.d. assumption. Specifically, we learn a style normalization transformation and the regression model simultaneously. An alternating optimization with guaranteed convergence is designed, as well as a transductive learning algorithm that enables extension to unseen styles in the testing phase. Experiments are conducted on two synthetic and two real benchmark data sets. Results show that the proposed F-SVR significantly outperforms many other state-of-the-art regression models on all the data sets used.
7. Brain Storm Optimization Algorithm for Multi-objective Optimization Problems

Author:Xue, JQ;Wu, YL;Shi, YH;Cheng, S

Source:ADVANCES IN SWARM INTELLIGENCE, ICSI 2012, PT I,2012,Vol.7331

Abstract:In this paper, a novel multi-objective optimization algorithm based on the brainstorming process, MOBSO, is proposed. In addition to the operations used in traditional multi-objective optimization algorithms, a clustering strategy is adopted in the objective space. Two typical mutation operators, Gaussian mutation and Cauchy mutation, are utilized independently in the generation process and their performances are compared. A group of multi-objective problems with different characteristics was tested to validate the effectiveness of the proposed algorithm. Experimental results show that MOBSO is a very promising algorithm for solving multi-objective optimization problems.
8. Deep Mixtures of Factor Analyzers with Common Loadings: A Novel Deep Generative Approach to Clustering

Author:Yang, X;Huang, KZ;Zhang, R

Source:NEURAL INFORMATION PROCESSING, ICONIP 2017, PT I,2017,Vol.10634

Abstract:In this paper, we propose a novel deep density model called Deep Mixtures of Factor Analyzers with Common Loadings (DMCFA). Employing a mixture of factor analyzers that share common component loadings, the model is more physically meaningful, since the common loadings can be regarded as feature selection or reduction matrices. Importantly, DMCFA remarkably reduces the number of free parameters, making the involved inference and learning problems dramatically easier. Despite its simplicity, by engaging learnable Gaussian distributions as the priors, DMCFA does not sacrifice flexibility in estimating the data density. This is particularly the case when compared with the existing Deep Mixtures of Factor Analyzers (DMFA) model, which exploits different loading matrices but simple standard Gaussian distributions for each component prior. We evaluate the performance of the proposed DMCFA in comparison with three competitive models, Mixtures of Factor Analyzers (MFA), MCFA, and DMFA, as well as their shallow counterparts. Results on four real data sets show that the novel model delivers significantly better performance in both density estimation and clustering.
9. Learning Relations from Social Tagging Data

Author:Dong, H;Wang, W;Coenen, F

Source:PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I,2018,Vol.11012

Abstract:An interesting research direction is to discover structured knowledge from user-generated data. Our work aims to find relations among social tags and organise them into hierarchies so as to better support discovery and search for online users. We cast relation discovery in this context as a binary classification problem in supervised learning. This approach takes as input features of two tags extracted using probabilistic topic modelling, and predicts whether a broader-narrower relation holds between them. Experiments were conducted using two large, real-world datasets: the Bibsonomy dataset, which is used to extract tags and their features, and the DBpedia dataset, which is used as the ground truth. Three sets of features were designed and extracted based on topic distributions, similarity, and probabilistic associations. Evaluation results with respect to the ground truth demonstrate that our method outperforms existing ones based on various features and heuristics. Future work will study knowledge base enrichment from folksonomies and deep neural network approaches for processing tagging data.
10. An Improved Reversible Information Hiding Scheme Based on AMBTC Compressed Images

Author:Yi, PY;Yin, ZX;Feng, GR;Abel, AK

Source:CLOUD COMPUTING AND SECURITY, PT I,2017,Vol.10602

Abstract:This paper proposes an efficient reversible information hiding method based on AMBTC compressed images. At present, most digital images are stored and transmitted in compressed form, so the research and development of such schemes is necessary. Hong et al. proposed a reversible information hiding approach based on AMBTC compression that provides considerable embedding capacity and effectively reduces the bit-rate. However, it neither carries out a detailed categorization of the error value, i.e. the difference between the original quantization value and the predicted quantization value, nor adopts the most suitable method for encoding the category information, so there is room to further reduce the bit-rate. In this paper, we propose an Improved Centralized Error Division (ICED) technique to conduct a more detailed categorization of the error value. In addition, we adopt an optimal Huffman code to encode the category information, further reducing the bit-rate. Our experimental results show that the proposed approach has the same embedding capacity as Hong et al.'s method, higher than that of other related work, and a lower bit-rate than Hong et al.'s method.
11. Improving Deep Neural Network Performance with Kernelized Min-Max Objective

Author:Yao, K;Huang, KZ;Zhang, R;Hussain, A

Source:NEURAL INFORMATION PROCESSING (ICONIP 2018), PT I,2018,Vol.11301

Abstract:In this paper, we present a novel training strategy using a kernelized Min-Max objective to improve the object recognition performance of deep neural networks (DNNs), e.g., convolutional neural networks (CNNs). Without changing other parts of the original model, the kernelized Min-Max objective combines the kernel trick with the Min-Max objective and is embedded into a high layer of the network during the training phase. The proposed kernelized objective explicitly enforces the learned feature maps to maintain, in a kernel space, minimal compactness within each category manifold and maximal margin between different category manifolds. With very little additional computational cost, the proposed strategy can be widely used in different DNN models. Extensive experiments with a shallow convolutional neural network, a deep convolutional neural network, and a deep residual network on two benchmark datasets show that the proposed approach outperforms those competitive models.
12. Learning Latent Features with Infinite Non-negative Binary Matrix Tri-factorization

Author:Yang, X;Huang, KZ;Zhang, R;Hussain, A

Source:NEURAL INFORMATION PROCESSING, ICONIP 2016, PT I,2016,Vol.9947

Abstract:Non-negative Matrix Factorization (NMF) has been widely exploited to learn latent features from data. However, previous NMF models often assume a fixed number of features, say p, where p is simply found by experiment. Moreover, it is difficult to learn binary features, since binary matrices involve more challenging optimization problems. In this paper, we propose a new Bayesian model called the infinite non-negative binary matrix tri-factorization model (iNBMT), capable of automatically learning latent binary features as well as the number of features, based on the Indian Buffet Process (IBP). Moreover, iNBMT engages a tri-factorization process that decomposes a non-negative matrix into the product of three components: two binary matrices and a non-negative real matrix. Compared with traditional bi-factorization, tri-factorization can better reveal the latent structure among items (samples) and attributes (features). Specifically, we impose an IBP prior on the two infinite binary matrices, while a truncated Gaussian distribution is assumed on the weight matrix. To optimize the model, we develop an efficient modified maximization-expectation algorithm (ME-algorithm), whose iteration complexity is one order lower than that of the recently proposed Maximization-Expectation-IBP model [9]. We present the model definition, detail the optimization, and finally conduct a series of experiments. Experimental results demonstrate that the proposed iNBMT model significantly outperforms the comparison algorithms on both synthetic and real data.
13. Parameter Estimation of Vertical Two-Layer Soil Model via Brain Storm Optimization Algorithm

Author:Ting, TO;Shi, YH

Source:ADVANCES IN SWARM INTELLIGENCE, ICSI 2016, PT I,2016,Vol.9712

Abstract:A practical soil model is derived mathematically based on the measurement principles of Wenner's method, a conventional approach to measuring apparent soil resistivity. This model consists of two soil layers, stacked vertically, with different properties; it is thus called the vertical two-layer soil model. The motivation for the mathematical model is to accurately estimate the relevant parameters from data obtained through site measurements. This parameter estimation is in fact a challenging optimization problem: the plotted graphs show that it features a continuous but non-smooth landscape with a steep valley, which poses a great challenge to any optimization tool. Two prominent algorithms are applied, namely Gauss-Newton (GN) and Brain Storm Optimization (BSO). The results show that GN is fast but diverges from bad starting points, whereas BSO is slower but never diverges and is more stable.
14. Depth-Based Stereoscopic Projection Approach for 3D Saliency Detection

Author:Lin, HY;Lin, CY;Zhao, Y;Xiao, JM;Tillo, T

Source:ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2015, PT I,2015,Vol.9314

Abstract:With the popularity of 3D displays and the widespread use of depth cameras, 3D saliency detection has become feasible and significant. Unlike 2D saliency detection, 3D saliency detection adds a depth channel, so the influence of depth and binocular parallax must be taken into account. In this paper, a new depth-based stereoscopic projection approach is proposed for 3D visual salient region detection. 3D images reconstructed from color and depth images are projected onto the XOZ and YOZ planes along a specific direction. We find some obvious characteristics that help us remove the background and progressive surfaces, whose depth increases from near to far, so that salient regions are detected more accurately. A depth saliency map (DSM) is then created and combined with a 2D saliency map to obtain the final 3D saliency map. Our approach performs well in removing progressive surfaces and the background, which are difficult to detect in 2D saliency detection.
15. Learning from Few Samples with Memory Network

Author:Zhang, SF;Huang, KZ

Source:NEURAL INFORMATION PROCESSING, ICONIP 2016, PT I,2016,Vol.9947

Abstract:Neural networks (NNs) have achieved great success in pattern recognition and machine learning. However, this success usually relies on a sufficiently large number of samples; when fed with limited data, an NN's performance may degrade significantly. In this paper, we introduce a novel neural network called the Memory Network, which can learn better from limited data. Taking advantage of the memory of previous samples, the new model achieves remarkable performance improvements on limited data. We demonstrate the memory network in a Multi-Layer Perceptron (MLP); however, it is straightforward to extend the idea to other neural networks, e.g., Convolutional Neural Networks (CNNs). We detail the network structure, present the training algorithm, and conduct a series of experiments to validate the proposed framework. Experimental results show that our model outperforms the traditional MLP and other competitive algorithms on two real data sets.
16. Normalized Population Diversity in Particle Swarm Optimization

Author:Cheng, S;Shi, YH

Source:ADVANCES IN SWARM INTELLIGENCE, PT I,2011,Vol.6728

Abstract:The particle swarm optimization (PSO) algorithm can be viewed as a series of iterative matrix computations, and its population diversity can be considered an observation of the distribution of matrix elements. In this paper, the PSO algorithm is first represented in matrix format, then the PSO normalized population diversities are defined and discussed based on matrix analysis. Based on the analysis of the relationship between pairs of vectors in the PSO solution matrix, different population diversities are defined for separable and non-separable problems, respectively. Experiments on benchmark functions are conducted, and the simulation results illustrate the effectiveness and usefulness of the proposed normalized population diversities.
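One common L1-style position diversity, normalized by the search range, can be computed as follows. This is a sketch of one plausible definition; the paper defines and compares several normalized diversities for separable and non-separable problems:

```python
import numpy as np

def normalized_position_diversity(pop, lo, hi):
    # pop: (n_particles, n_dims) position matrix; lo/hi: search bounds.
    # Dimension-wise mean absolute deviation from the swarm centroid,
    # normalized by the search range, averaged over dimensions.
    center = pop.mean(axis=0)
    per_dim = np.abs(pop - center).mean(axis=0) / (hi - lo)
    return per_dim.mean()
```

The measure is 0 when all particles coincide and grows as the swarm spreads, which is what makes it usable as an observation of the swarm's exploration/exploitation state.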
17. Inertia Weight Adaption in Particle Swarm Optimization Algorithm

Author:Zhou, Z;Shi, YH

Source:ADVANCES IN SWARM INTELLIGENCE, PT I,2011,Vol.6728

Abstract:In Particle Swarm Optimization (PSO), setting the inertia weight w is one of the most important topics. The inertia weight was introduced into PSO to balance its global and local search abilities. In this paper, we first propose a method to adaptively adjust the inertia weight based on the particles' velocity information. Second, we utilize both position and velocity information to adaptively adjust the inertia weight. The proposed methods are then tested on benchmark functions, and the simulation results illustrate their effectiveness and efficiency in comparison with other existing PSO variants.
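For illustration, here is the standard PSO update together with one plausible velocity-based inertia adaptation rule. The adaptation formula below, including the `v_ref` reference speed, is an assumption made for illustration, not the paper's actual rule:

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    # Standard PSO velocity/position update with inertia weight w.
    if rng is None:
        rng = np.random.default_rng(0)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def adapt_inertia(vel, w_min=0.4, w_max=0.9, v_ref=1.0):
    # Illustrative velocity-based adaptation: shrink w toward w_min when the
    # swarm's average speed is high (to damp exploration) and let it grow
    # toward w_max as particles slow down.
    speed = np.linalg.norm(vel, axis=1).mean()
    return w_min + (w_max - w_min) / (1.0 + speed / v_ref)
```

Calling `adapt_inertia` once per iteration and feeding the result into `pso_step` yields a w that responds to the swarm's current velocity state instead of following a fixed schedule.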
18. Exponential Inertia Weight for Particle Swarm Optimization

Author:Ting, TO;Shi, YH;Cheng, S;Lee, S

Source:ADVANCES IN SWARM INTELLIGENCE, ICSI 2012, PT I,2012,Vol.7331

Abstract:An exponential inertia weight is proposed in this work, aiming to improve the search quality of the Particle Swarm Optimization (PSO) algorithm. The idea is based on the adaptive crossover rate used in the Differential Evolution (DE) algorithm; the same formula is adopted and applied to the inertia weight, w. We further investigate the characteristics of the adaptive w graphically, and careful analysis shows that there exist two important parameters in the equation for the adaptive w: one acting as a local attractor and the other as a global attractor. Twenty-three benchmark problems, consisting of both high- and low-dimensional problems, are adopted as the test bed in this study. Simulation results show that the proposed method achieves significant improvement compared to the linearly decreasing inertia weight technique that is widely used in the literature.
19. Brain Storm Optimization in Objective Space Algorithm for Multimodal Optimization Problems

Author:Cheng, S;Qin, QD;Chen, JF;Wang, GG;Shi, YH

Source:ADVANCES IN SWARM INTELLIGENCE, ICSI 2016, PT I,2016,Vol.9712

Abstract:The aim of multimodal optimization is to locate multiple peaks/optima in a single run and to maintain these found optima until the end of the run. In this paper, the brain storm optimization in objective space (BSO-OS) algorithm is utilized to solve multimodal optimization problems. Our goal is to measure the performance and effectiveness of the BSO-OS algorithm. Experimental tests are conducted on eight benchmark functions. Based on the experimental results, we conclude that the BSO-OS algorithm performs well on multimodal optimization problems. To obtain good performance on such problems, an algorithm needs to balance its global search ability and its ability to maintain found solutions.
20. Global Motion Information Based Depth Map Sequence Coding

Author:Cheng, F;Xiao, JM;Tillo, T;Zhao, Y

Source:ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2015, PT I,2015,Vol.9314

Abstract:Depth maps are currently exploited in 3D video coding and computer vision systems. In this paper, a novel depth map sequence coding method assisted by global motion information is proposed. The global motion information of the depth camera is synchronously sampled to help the encoder improve depth map coding performance. The approach works by down-sampling the frame rate at the encoder side; at the decoder side, each skipped frame is projected from its neighboring depth frames using the camera's global motion. Because the frame rate of the depth sequence is down-sampled, the coding rate-distortion performance is improved. Finally, experimental results demonstrate that the proposed method enhances coding performance under various camera motion conditions, with gains of up to 2.04 dB.
Total 22 results found