Find Theses

1. Green Supply Chain Management in Manufacturing Small and Medium‐sized Enterprises: Perspective from Chang Chiang Delta

Author: XiangMeng HUANG, 2013
Abstract: This research started from an interest in how small and medium-sized enterprises (SMEs) in the manufacturing industry within the geographical area of the Chang Chiang Delta in China operate with respect to sustainability by developing green supply chain management (GSCM). The aim of this study is therefore to investigate the pressures on SME manufacturers to implement GSCM practices, and to examine the relationship between those practices and the corresponding performance at a regional level in the context of the Chang Chiang Delta in China. To accomplish this task, a range of literature is evaluated, focusing on GSCM theories and adoption. This review reveals a research gap regarding SMEs' implementation of GSCM, to which this study responds. The research is underpinned by an interpretive epistemology and a multi-method design. It is an exploratory and empirical study with two rounds of primary data collection gathered from SME manufacturers in the Chang Chiang Delta region of China, which comprises the triangular-shaped territory of Shanghai, southern Jiangsu Province and northern Zhejiang Province, including the urban cores of five cities: Shanghai, Nanjing, Hangzhou, Suzhou and Ningbo. In addition, a qualitative case study is employed to provide more detailed information about GSCM implementation in SMEs. The results derived from both the questionnaire survey and the case study provide strong evidence that Chinese manufacturing SMEs have been under regulatory, customer, supplier, public and internal pressures from different stakeholders in terms of GSCM. In response to these pressures, SMEs have adopted some GSCM practices, including green purchasing, eco-design, investment recovery, cooperation with customers and internal environmental management, and these practices are specific to the industrial sector considered in this study.
These practices do contribute to improving performance economically, environmentally and operationally. From the literature review and the empirical findings, this research provides contributions to knowledge, as well as managerial implications. It contributes to knowledge by providing conceptual and empirical insights into how GSCM is viewed and developed among SME manufacturers, clarifying the conceptions relating to sustainability, and incorporating stakeholder theory and the theory of industrial ecology in examining GSCM development. The study also has practical implications, providing suggestions and guidance to governments, the public, suppliers and customers across the chain, as well as to the managers of SMEs, and proposing an optimised model for the selected case for improved GSCM performance.

3. Statistical Feature Ordering for Neural-based Incremental Attribute Learning

Author: Ting WANG, 2013
Abstract: In pattern recognition, better classification or regression results usually depend on highly discriminative features (also known as attributes) of datasets. Machine learning plays a significant role in the performance improvement of classification and regression. Unlike conventional machine learning approaches, which train all features in one batch with predictive algorithms such as neural networks and genetic algorithms, Incremental Attribute Learning (IAL) is a novel supervised machine learning approach which gradually trains one or more features step by step. Such a strategy enables features with greater discrimination ability to be trained in an earlier step, and avoids interference among relevant features. Previous studies have confirmed that IAL is able to generate accurate results with lower error rates. If features with different discrimination abilities are sorted in different training orders, the final results may be strongly influenced. Therefore, how to sort features into a training order that reduces pattern recognition error rates under IAL becomes an important issue in this study. Compared with the applicable yet time-consuming contribution-based feature ordering methods derived in previous studies, more efficient feature ordering approaches for IAL are presented here to tackle classification problems. In the first approach, feature orderings are calculated from statistical correlations between input and output. The second approach is based on mutual information, employing the minimal-redundancy-maximal-relevance criterion (mRMR), a well-known feature selection method, for feature ordering. The third approach builds on Fisher's Linear Discriminant (FLD). Firstly, the Single Discriminability (SD) of features is defined based on FLD, which can cope with both univariate and multivariate output classification problems.
Secondly, a new feature ordering metric called Accumulative Discriminability (AD) is developed based on SD. This metric is designed for IAL classification with dynamic feature dimensions. It computes, at each step, the multidimensional discrimination ability of all imported features, including those imported in previous steps of IAL training. AD can be treated as a metric of accumulative effect, while SD only measures the one-dimensional feature discrimination ability at each step. Experimental results show that all three approaches exhibit better performance than the conventional one-batch training method. Furthermore, AD gives the best results of the three, because it best fits the nature of IAL, where the feature number increases step by step. Moreover, a study on the combined use of feature ordering and feature selection in IAL is also presented in this thesis. As a pre-process of machine learning for pattern recognition, feature ordering is sometimes inevitably employed together with feature selection. Experimental results show that these integrated approaches sometimes obtain better performance than non-integrated approaches, and sometimes do not. Additionally, feature ordering approaches for solving regression problems are also demonstrated. Experimental results show that a proper feature ordering is one of the key elements in enhancing the accuracy of the results obtained.
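The first approach described in this abstract, correlation-based feature ordering, can be sketched in a few lines. This is a minimal illustration under our own toy data, not the author's implementation: it ranks feature indices by the absolute Pearson correlation between each feature column and the output, so that the most discriminative features would enter IAL training first.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlation_feature_order(features, target):
    """Rank feature indices by |correlation| with the target, strongest first,
    so the most discriminative features are trained earliest in IAL."""
    scores = [(abs(pearson(col, target)), i) for i, col in enumerate(features)]
    return [i for _, i in sorted(scores, reverse=True)]

# toy data: feature 0 tracks the target closely, feature 1 is noise-like
f0 = [1.0, 2.0, 3.0, 4.0]
f1 = [5.0, -1.0, 4.0, 0.0]
y = [1.1, 2.1, 2.9, 4.2]
order = correlation_feature_order([f0, f1], y)   # feature 0 ranked first
```

The mRMR- and FLD-based orderings of the thesis replace the scoring function but keep the same sort-then-train skeleton.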

4. Real-time Interactive Video Streaming over Lossy Networks: High-Performance, Low-Delay, Error-Resilient Algorithms

Author: Jimin XIAO, 2013
Abstract: According to Cisco's latest forecast, two-thirds of the world's mobile data traffic and 62 percent of consumer Internet traffic will be video data by the end of 2016. However, wireless networks and the Internet are unreliable: video traffic may undergo packet loss and delay. Robust video streaming over unreliable networks, i.e., the Internet and wireless networks, is therefore of great importance in facing this challenge. Specifically, for real-time interactive video streaming applications, such as video conferencing and video telephony, the allowed end-to-end delay is limited, which makes robust video streaming an even more difficult task. In this thesis, we investigate robust video streaming for real-time interactive applications, where the tolerated end-to-end delay is limited. Intra macroblock refreshment is an effective tool to stop error propagation in the prediction loop of the video decoder, whereas redundant coding is a commonly used method to prevent errors from occurring in video transmission over lossy networks. In this thesis, two schemes that jointly use intra macroblock refreshment and redundant coding are proposed. In these schemes, in addition to intra coding, two redundant coding methods are added to enhance the transmission robustness of the coded bitstreams. The selection of error-resilient coding tools, i.e., intra coding and/or redundant coding, and the parameters for redundant coding are determined using end-to-end rate-distortion optimization. Another category of methods providing error resilience is forward error correction (FEC) codes. FEC is widely studied to protect streamed video over unreliable networks, with Reed-Solomon (RS) erasure codes as its commonly used implementation.
As a block-based error-correcting code, RS coding faces a trade-off: on the one hand, enlarging the block size enhances the performance of the RS codes; on the other hand, a large block size leads to long delay, which is not tolerable for real-time video applications. In this thesis, two sub-GOP (Group of Pictures, formed by an I-frame and all the following P/B-frames) based FEC schemes are proposed to improve the performance of Reed-Solomon codes for real-time interactive video applications. The first, named DSGF (Dynamic Sub-GOP FEC Coding), is designed for the ideal case, where no transmission network delay is taken into consideration. The second, named RVS-LE (Real-time Video Streaming scheme exploiting Late- and Early-arrival packets), is more practical: the video transmission network delay is considered, and late- and early-arrival packets are fully exploited. In both approaches, the sub-GOP, which contains more than one video frame, is dynamically tuned and used as the RS coding block to get the optimal performance. Although the overall error-resilient performance of the proposed DSGF approach is higher than that of conventional FEC schemes, which protect the streamed video frame by frame, its video quality fluctuates within the sub-GOP. To mitigate this problem, another real-time video streaming scheme, using a randomized expanding Reed-Solomon code, is proposed. In this scheme, the Reed-Solomon coding block includes not only the video packets of the current frame, but also all the video packets of previous frames in the current group of pictures (GOP). At the decoding side, the parity-check equations of the current frame are jointly solved with all the parity-check equations of the previous frames. Since video packets of the following frames are not encompassed in the RS coding block, no delay is incurred waiting for the video or parity packets of the following frames at either the encoding or the decoding side.
The main contribution of this thesis is investigating the trade-off between the video transmission delay caused by FEC encoding/decoding dependency, the FEC error-resilient performance, and the computational complexity. By leveraging the methods proposed in this thesis, proper error-resilient tools and system parameters can be selected based on the video sequence characteristics, the application requirements, and the available channel bandwidth and computational resources. For example, for applications that can tolerate relatively long delay, the sub-GOP based approach is a suitable solution. For applications where the end-to-end delay requirement is stringent and the computational resource is sufficient (e.g. the CPU is fast), the randomized expanding Reed-Solomon code is a wise choice.
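To make the block-size/delay trade-off concrete, here is a toy erasure-coding sketch. A single XOR parity packet stands in for a real Reed-Solomon code (which can repair as many erasures as it has parity packets): grouping more frames into one coding block amortizes the parity overhead, but the decoder must wait for the whole block before it can repair a loss, which is exactly the delay the sub-GOP schemes manage.

```python
def xor_bytes(blocks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, v in enumerate(b):
            out[i] ^= v
    return bytes(out)

def encode_block(packets):
    """Append one parity packet to a coding block (e.g. one sub-GOP of packets)."""
    return packets + [xor_bytes(packets)]

def decode_block(received):
    """Recover a single erased packet (marked None) from the parity, if possible."""
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return received[:-1]
    if len(missing) > 1:
        raise ValueError("more erasures than this toy code can repair")
    present = [p for p in received if p is not None]
    out = list(received)
    out[missing[0]] = xor_bytes(present)   # XOR of survivors equals the lost packet
    return out[:-1]

frames = [b"I-frame ", b"P-frame1", b"P-frame2"]
coded = encode_block(frames)
coded[1] = None                            # simulate a packet lost in transit
assert decode_block(coded) == frames
```

A real RS(n, k) code generalizes this: n - k parity packets repair up to n - k erasures per block.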

5. Robust Moving Object Detection by Information Fusion from Multiple Cameras

Author: Jie REN, 2014
Abstract: Moving object detection is an essential process before tracking and event recognition in video surveillance can take place. To monitor a wider field of view and avoid occlusions in pedestrian tracking, multiple cameras are usually used, and homography can be employed to associate the multiple camera views. Foreground regions detected in each camera view are projected into a virtual top view according to the homography for a plane. The intersection regions of the foreground projections indicate the locations of moving objects on that plane. Homography mapping for a set of parallel planes at different heights can increase the robustness of the detection. However, homography mapping is very time-consuming, and the intersections of non-corresponding foreground regions can cause false-positive detections. In this thesis, a real-time moving object detection algorithm using multiple cameras is proposed. Unlike pixelwise homography mapping, which projects binary foreground images, the approach taken in this thesis is to approximate the contour of each foreground region with a polygon and to transmit and project only the polygon vertices. The foreground projections are rebuilt from the projected polygons in the reference view. The experimental results show that this method runs in real time and generates results similar to those obtained using foreground images. To identify the false-positive detections, both geometrical information and colour cues are utilized. The former is a height matching algorithm based on the geometry between the camera views. The latter is a colour matching algorithm based on the Mahalanobis distance between the colour distributions of two foreground regions. Since height matching is uncertain in scenarios with adjacent pedestrians, and colour matching cannot handle occluded pedestrians, the two algorithms are combined to improve the robustness of the foreground intersection classification.
The robustness of the proposed algorithm is demonstrated in real-world image sequences.
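As a rough sketch of the projection step (not the thesis code), a planar homography can be applied to just the polygon vertices of a foreground contour. Projecting a handful of vertices instead of every foreground pixel is what makes the approach real-time; the homography matrix here is a hypothetical pure translation chosen so the result is easy to check.

```python
def apply_homography(H, points):
    """Project 2-D points through a 3x3 homography (row-major nested lists),
    dividing by the homogeneous coordinate w."""
    projected = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        projected.append((xh / w, yh / w))
    return projected

# a pure-translation homography: shift everything by (10, 5)
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
polygon = [(0, 0), (4, 0), (4, 6), (0, 6)]   # approximated foreground contour
top_view = apply_homography(H, polygon)
```

Rebuilding the foreground mask then amounts to rasterizing `top_view` in the reference plane and intersecting the polygons from the different cameras.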

6. Cascade of Classifier Ensembles for Reliable Medical Image Classification

Author: Yungang ZHANG, 2014
Abstract: Medical image analysis and recognition is one of the most important tools in modern medicine. Different types of imaging technologies, such as X-ray, ultrasonography, biopsy, computed tomography and optical coherence tomography, have been widely used in clinical diagnosis for various kinds of diseases. However, in clinical applications, it is usually time-consuming to examine an image manually. Moreover, there is always a subjective element in the pathological examination of an image, which carries the risk of a doctor making a wrong decision. An automated technique can therefore provide valuable assistance for physicians. By utilizing techniques from machine learning and image analysis, this thesis aims to construct reliable diagnostic models for medical image data so as to reduce the problems faced by medical experts in image examination. Through supervised learning of the image data, the diagnostic model can be constructed automatically. The process of image examination by human experts is very difficult to simulate, as the knowledge of medical experts is often fuzzy and not easy to quantify. Therefore, the problem of automatic diagnosis based on images is usually converted into an image classification problem. In image classification tasks, a single classifier often fails to capture all aspects of the image data distribution. Therefore, in this thesis, a classifier ensemble based on the random subspace method is proposed to classify microscopic images. Multi-layer perceptrons are used as the base classifiers in the ensemble. Three types of feature extraction methods are selected for microscopic image description. The proposed method was evaluated on two microscopic image sets and showed promising results compared with state-of-the-art results. In order to address classification reliability in biomedical image classification problems, a novel cascade classification system is designed.
Two random subspace based classifier ensembles are serially connected in the proposed system. In the first stage of the cascade, an ensemble of support vector machines is used as the base classifier set. The second stage consists of a neural network classifier ensemble. Using the reject option, images whose classification results cannot reach the predefined rejection threshold at the current stage are passed to the next stage for further consideration. The proposed cascade system was evaluated on a breast cancer biopsy image set and two UCI machine learning datasets; the experimental results showed that the proposed method can achieve high classification reliability and accuracy with a small rejection rate. Many computer-aided diagnosis systems face the problem of imbalanced data: the datasets used for diagnosis are often imbalanced, as the number of normal cases is usually larger than the number of disease cases. Classifiers that generalize over the data are not the most appropriate choice in such an imbalanced situation. To tackle this problem, a novel one-class classifier ensemble is proposed. Kernel Principal Component models are selected as the base classifiers in the ensemble; the base classifiers are trained on different types of image features respectively and then combined using a product combining rule. The proposed one-class classifier ensemble is also embedded into the cascade scheme to improve classification reliability and accuracy. The proposed method was evaluated on two medical image sets, and favorable results were obtained compared with state-of-the-art results.
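The reject-and-pass-on control flow of such a cascade can be sketched as below. The stage classifiers here are hypothetical stand-ins (simple lambdas over a scalar score), not the SVM and neural-network ensembles of the thesis; the point is only the routing by confidence threshold.

```python
def cascade_classify(sample, stages, thresholds):
    """Run a sample through serially connected classifier stages.
    Each stage returns (label, confidence); if confidence falls below that
    stage's threshold the sample is rejected to the next stage.  Returns
    (label, stage_index), or (None, None) if every stage rejects."""
    for k, (stage, thr) in enumerate(zip(stages, thresholds)):
        label, conf = stage(sample)
        if conf >= thr:
            return label, k
    return None, None

# hypothetical stand-ins: stage 1 is confident only on low scores,
# stage 2 is confident everywhere
stage1 = lambda s: ("benign", 0.6) if s < 0.5 else ("malignant", 0.55)
stage2 = lambda s: ("malignant", 0.9) if s >= 0.5 else ("benign", 0.9)

assert cascade_classify(0.2, [stage1, stage2], [0.6, 0.8]) == ("benign", 0)
assert cascade_classify(0.7, [stage1, stage2], [0.6, 0.8]) == ("malignant", 1)
```

Raising the thresholds trades a higher rejection (pass-on) rate for higher reliability of the accepted decisions, which is the knob the cascade tunes.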

7. Understanding EAP Learners’ Beliefs, Motivation and Strategies from a Socio-cultural Perspective: A Longitudinal Study at an English-Medium University in Mainland China

Author: Chili LI, 2014
Abstract: Research on second language learners’ beliefs, motivation, and strategies has been growing in recent decades. However, few studies have been undertaken on Chinese tertiary learners of English for academic purposes (EAP) within the broader English as a foreign language (EFL) context. The current call for socio-cultural theory in second language acquisition (SLA) has also highlighted the necessity of a socio-cultural approach to research on learners’ beliefs, motivation, and strategies. This study thus aims to fill these gaps by following a socio-cultural approach to examining changes in the beliefs, motivation, and strategies of a cohort of Chinese tertiary EAP learners in Mainland China. The study is longitudinal and situated in a Sino-foreign university where English is used as the Medium of Instruction (EMI). Data were collected through questionnaires and semi-structured interviews at two stages. The design of the questionnaires and interviews was informed by current discussion of learners’ beliefs, motivation, and strategies in the literature on second language teaching and research. At the first stage, the questionnaire was administered to 1026 students upon their arrival at the EMI University, and 16 students were selected for semi-structured interviews. At the second stage, after the students had studied EAP for one academic year at the EMI University, the questionnaire was distributed again to the same cohort, and semi-structured interviews were conducted with the same group of participants, in order to identify potential changes in their beliefs, motivation, and strategies and to obtain an in-depth understanding of the nature of those changes. The questionnaire surveys identified significant changes in the participants’ beliefs, motivation, and strategies after they had studied EAP for an academic year at the EMI University.
The participants showed stronger beliefs about the difficulty and nature of language learning and autonomous language learning, a significant increase in motivation, and a higher level of use of learning strategies. Changes in the three learner variables were also found in the interviews. These changes indicate possible influence of learning context upon learners’ beliefs, motivation, and strategies. The analysis of the in-depth interviews further revealed that these changes were attributable to the mediation of various socio-cultural factors in the EMI setting, including the learning environment at the EMI University, studying content subjects in English, learning tasks, extracurricular activities, formative assessments, and other important factors such as teachers and peers. The interviews also illustrated that the dynamic changes in the participants’ beliefs, motivation, and strategies might be accounted for by the participants’ internalisation of the mediation of the socio-cultural factors through exercising their agency. Based on the findings, this research argues that the development of language learners’ beliefs, motivation, and strategies is the result of the interplay between agency and context. The present study deepens our understanding of the nature of learner development in that it contributes to the socio-cultural exploration of contextual influence on second language learning in SLA research. The study also has pedagogical significance for its practical recommendations for English language teaching in EMI settings in Mainland China and other similar EFL contexts.

8. Dielectric Relaxation and Frequency Dependence of HfO2 Doped by Lanthanide Elements

Author: Chun ZHAO, 2014
Abstract: The decreasing feature sizes in complementary metal oxide semiconductor (CMOS) transistor technology require the replacement of SiO2 with gate dielectrics that have a high dielectric constant (k). When the SiO2 gate thickness was reduced below 1.4 nm, electron tunneling effects and high leakage currents occurred, presenting serious reliability obstacles for metal-oxide-semiconductor field-effect transistor (MOSFET) devices. In recent years, various alternative gate dielectrics have been researched. Following the introduction of HfO2 into the 45 nm process by Intel in 2007, the screening and selection of high-k gate stacks, understanding their properties, and their integration into CMOS technology have been a very active research area. Frequency dispersion of high-k dielectrics is commonly observed and is classified into two parts: extrinsic and intrinsic causes. The frequency dependence of the dielectric constant (k-value), that is, the intrinsic frequency dispersion, cannot be assessed before suppressing the effects of extrinsic frequency dispersion, such as the effects of the lossy interfacial layer (between the high-k thin film and the silicon substrate) and the parasitic effects. The significance of parasitic effects (including series resistance and the back metal contact of the metal-oxide-semiconductor (MOS) capacitor) on frequency dispersion was studied. The effect of the lossy interfacial layer on frequency dispersion was investigated and modeled using a dual-frequency technique. The effect of surface roughness on frequency dispersion was also investigated. Several mathematical models were discussed to describe the dielectric relaxation of high-k dielectrics. Some of the relaxation behavior can be modeled using the Curie-von Schweidler (CS) law, the Kohlrausch-Williams-Watts (KWW) relationship and the Havriliak-Negami (HN) relationship. Other relaxation models were also introduced.
Regarding the physical mechanism, dielectric relaxation was found to be related to the degree of polarization, which depends on the structure of the high-k material. The degree of polarization was attributed to the enhancement of correlations among polar nano-scale domains within the material. The effect of grain size on the behaviour of high-k materials mainly originated from the higher surface stress in smaller grains, due to their higher concentration of grain boundaries.
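For reference, the two most common empirical relaxation laws named in this abstract have simple closed forms. The sketch below merely evaluates them; J0, n, tau and beta are free fitting parameters of the models, not values taken from the thesis.

```python
import math

def curie_von_schweidler(t, j0, n):
    """CS law: relaxation current decays as a power law, J(t) = J0 * t**(-n),
    with 0 < n < 1 for typical high-k dielectrics."""
    return j0 * t ** (-n)

def kww(t, tau, beta):
    """KWW stretched exponential: phi(t) = exp(-(t / tau)**beta),
    with 0 < beta <= 1 (beta = 1 recovers simple Debye relaxation)."""
    return math.exp(-((t / tau) ** beta))

# at t = tau the KWW function always equals 1/e, regardless of beta
assert abs(kww(3.0, 3.0, 0.7) - math.exp(-1)) < 1e-12
```

Fitting measured relaxation currents to these forms (or to the frequency-domain Havriliak-Negami function) is how the dispersion models in the abstract are typically parameterised.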

9. Power Line Communications over Time-Varying Frequency-Selective Power Line Channels for Smart Home Applications

Author: Wenfei ZHU, 2014
Abstract: Many countries in the world are developing the next generation power grid, the smart grid, to combat the ongoing severe environmental problems and achieve efficient use of the electricity power grid. Smart metering is an enabling technology in the smart grid to address the energy wasting problem. It monitors and optimises the power consumption of consumers’ devices and appliances. To ensure proper operation of smart metering, a reliable communication infrastructure plays a crucial role. Power line communication (PLC) is regarded as a promising candidate to fulfil the requirements of smart grid applications. It is also the only wired technology with a deployment cost comparable to wireless communication. PLC is most commonly used in the low-voltage (LV) power network, which includes indoor power networks and outdoor LV distribution networks. In this thesis we consider using PLC in the indoor power network to support the communication between the smart meter and a variety of appliances that are connected to the network. PLC system design for the indoor power network is challenging due to a variety of channel impairments, such as the time-varying frequency-selective channel and complex impulsive noise scenarios. Among these impairments, the time-varying channel behaviour is an interesting topic that has not been thoroughly investigated. Therefore, in this thesis we focus on investigating this behaviour and developing a low-cost but reliable PLC system that is able to support smart metering applications in indoor environments. To aid the study and design of such a system, the characterisation and modelling of the indoor power line channel are extensively investigated in this thesis. In addition, a flexible simulation tool that is able to generate random time-varying indoor power line channel realisations is demonstrated. Orthogonal frequency division multiplexing (OFDM) is commonly used in existing PLC standards.
However, when OFDM is adopted for time-varying power line channels, it may experience significant intercarrier interference (ICI) due to the Doppler spreading caused by channel time variation. Our investigation of the performance of an ordinary OFDM system over the time-varying power line channel reveals that if ICI is not properly compensated, the system may suffer severe performance loss. We also investigate the performance of some linear equalisers, including zero forcing (ZF), minimum mean squared error (MMSE) and banded equalisers. Among them, banded equalisers provide the best tradeoff between complexity and performance. For a better tradeoff between complexity and performance, time-domain receiver windowing is usually applied together with banded equalisers. This subject has been well investigated for wireless communication, but not for PLC. In this thesis, we investigate the performance over time-varying power line channels of some well-known receiver window design criteria that were developed for wireless communication. It is found that these criteria do not work well over time-varying power line channels. Therefore, to fill this gap, we propose an alternative window design criterion. Simulations have shown that our proposal outperforms the other criteria.
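Ignoring ICI for a moment, the ZF and MMSE equalisers mentioned above reduce, for a frequency-flat-per-subcarrier channel, to a one-tap operation per subcarrier. This toy sketch (not the banded equalisers of the thesis, which invert small matrices around each subcarrier to capture ICI) shows the basic operation and why MMSE is preferred on deeply faded subcarriers:

```python
def zf_equalize(y, h):
    """Zero-forcing one-tap equaliser per OFDM subcarrier: x_hat = y / h."""
    return [yk / hk for yk, hk in zip(y, h)]

def mmse_equalize(y, h, noise_var):
    """MMSE one-tap equaliser: x_hat = conj(h) * y / (|h|^2 + noise_var),
    which avoids the noise amplification ZF suffers on weak subcarriers."""
    return [hk.conjugate() * yk / (abs(hk) ** 2 + noise_var)
            for yk, hk in zip(y, h)]

h = [1 + 0j, 0.1 + 0.1j]          # second subcarrier is deeply faded
x = [1 + 0j, -1 + 0j]
y = [hk * xk for hk, xk in zip(h, x)]
x_hat = zf_equalize(y, h)          # exact inversion in the noiseless case
assert all(abs(a - b) < 1e-12 for a, b in zip(x_hat, x))
```

With noise present, ZF would multiply the noise on the faded subcarrier by 1/|h|, while the MMSE denominator keeps the gain bounded; a banded equaliser extends the same idea to a small matrix spanning the neighbouring subcarriers that leak ICI.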

10. Optimization Approaches for Parameter Estimation and Maximum Power Point Tracking (MPPT) of Photovoltaic Systems

Author: Jieming MA, 2014
Abstract: Optimization techniques are widely applied in various engineering areas, such as modeling, identification, optimization, prediction, forecasting and control of complex systems. This thesis presents novel optimization methods that are used to control photovoltaic (PV) generation systems. PV power systems are electrical power systems energized by PV modules or cells. The thesis starts with an introduction to the PV modeling methods on which our research is based. Parameter estimation is used to extract the parameters of the PV models characterizing the PV devices in use. To improve efficiency and accuracy, we propose sequential Cuckoo Search (CS) and Parallel Particle Swarm Optimization (PPSO) methods to extract the parameters for different PV electrical models. Simulation results show that CS has a faster convergence rate than the traditional Genetic Algorithm (GA), Pattern Search (PS) and Particle Swarm Optimization (PSO) in sequential processing. PPSO, with an accurate estimation capability, can reduce the elapsed time by at least 50% on an Intel i7 quad-core processor. A major challenge in the utilization of PV generation is posed by its nonlinear Current-Voltage (I-V) relations, which result in a unique Maximum Power Point (MPP) that varies with atmospheric conditions. Maximum Power Point Tracking (MPPT) is a technique employed to gain the maximum power available from PV devices. It tracks the operating voltage corresponding to the MPP and constrains the operating point at the MPP. A novel model-based two-stage MPPT strategy is proposed in this thesis to combine offline maximum power point estimation using the Weightless Swarm Algorithm (WSA) with an online Adaptive Perturb & Observe (APO) method. In addition, an Approximate Single Diode Model (ASDM) is developed for fast evaluation of the output power.
The feasibility of the proposed method is verified in an MPPT system implemented with a Single-Ended Primary-Inductor Converter (SEPIC). Simulation results show the proposed MPPT method is capable of driving the operating point to the MPP under various environmental conditions.
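The Perturb & Observe idea underlying the online APO stage is a simple hill climb on the P-V curve: keep perturbing the operating voltage in the direction that increased power, and reverse otherwise. The sketch below uses a hypothetical quadratic power curve, not a PV model from the thesis:

```python
def p_and_o_step(v, p, v_prev, p_prev, step):
    """One Perturb & Observe iteration: continue in the direction that
    increased power, reverse the perturbation otherwise."""
    if p >= p_prev:
        direction = 1 if v >= v_prev else -1
    else:
        direction = -1 if v >= v_prev else 1
    return v + direction * step

def track_mpp(power_of_v, v0, step, iters):
    """Hill-climb toward the maximum power point of a P-V curve."""
    v_prev, p_prev = v0, power_of_v(v0)
    v = v0 + step
    for _ in range(iters):
        p = power_of_v(v)
        v_next = p_and_o_step(v, p, v_prev, p_prev, step)
        v_prev, p_prev, v = v, p, v_next
    return v

# hypothetical concave P-V curve with its maximum at v = 17.0
curve = lambda v: -(v - 17.0) ** 2 + 100.0
v = track_mpp(curve, v0=12.0, step=0.5, iters=60)
assert abs(v - 17.0) <= 0.5   # converged to within one perturbation step
```

Plain P&O oscillates around the MPP by one step size; the adaptive variant shrinks the step near the peak, and the model-based offline stage supplies a good starting voltage so fewer online iterations are needed.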

11. Semi-Blind CFO Estimation and ICA based Equalization for Wireless Communication Systems

Author: Yufei JIANG, 2014
Abstract: In this thesis, a number of semi-blind structures are proposed for Orthogonal Frequency Division Multiplexing (OFDM) based wireless communication systems, with Carrier Frequency Offset (CFO) estimation and Independent Component Analysis (ICA) based equalization. In the first contribution, a semi-blind non-redundant single-user Multiple-Input Multiple-Output (MIMO) OFDM system is proposed, with a precoding aided CFO estimation approach and an ICA based equalization structure. A number of reference data sequences are carefully designed and selected from a pool of orthogonal sequences, serving two purposes at once. On the one hand, the precoding based CFO estimation is performed by minimizing the sum cross-correlation between the CFO-compensated signals and the rest of the orthogonal sequences in the pool. On the other hand, the same reference data sequences enable the elimination of the permutation and quadrant ambiguities in the ICA equalized signals. Simulation results show that the proposed semi-blind MIMO OFDM system can achieve a Bit Error Rate (BER) performance close to the ideal case with perfect Channel State Information (CSI) and no CFO. In the second contribution, a low-complexity semi-blind structure, with a multi-CFO estimation method and an ICA based equalization scheme, is proposed for multiuser Coordinated Multi-Point (CoMP) OFDM systems. A short pilot is carefully designed offline for each user and has a two-fold advantage. On the one hand, using the pilot structure, a complex multi-dimensional search for multiple CFOs is divided into a number of low-complexity mono-dimensional searches. On the other hand, the cross-correlation between the transmitted and received pilots is exploited to allow the simultaneous elimination of the permutation and quadrant ambiguities in the ICA equalized signals. Simulation results show that the proposed semi-blind CoMP OFDM system can provide a BER performance close to the ideal case with perfect CSI and no CFO.
In the third contribution, a semi-blind structure is proposed for Carrier Aggregation (CA) based CoMP Orthogonal Frequency Division Multiple Access (OFDMA) systems, with an ICA based joint Inter-Carrier Interference (ICI) mitigation and equalization scheme. The CFO-induced ICI is mitigated implicitly via ICA based equalization, without introducing feedback overhead for CFO correction. The permutation and quadrant ambiguities in the ICA equalized signals can be eliminated by a small number of pilots. Simulation results show that with a low training overhead, the proposed semi-blind equalization scheme can provide a BER performance close to the ideal case with perfect CSI and no CFO.
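For background on the impairment all three contributions address: a CFO of epsilon subcarrier spacings rotates the n-th time-domain sample by exp(j*2*pi*epsilon*n/N), and compensation simply counter-rotates. The sketch below shows only that signal model with a known epsilon; estimating epsilon, which is the hard part the thesis addresses, is out of scope here.

```python
import cmath
import math

def apply_cfo(samples, epsilon, n_fft):
    """Impose a carrier frequency offset of epsilon (normalised to the
    subcarrier spacing) on a block of time-domain samples."""
    return [s * cmath.exp(2j * math.pi * epsilon * n / n_fft)
            for n, s in enumerate(samples)]

def compensate_cfo(samples, epsilon_hat, n_fft):
    """Undo an estimated CFO by counter-rotating each sample."""
    return [s * cmath.exp(-2j * math.pi * epsilon_hat * n / n_fft)
            for n, s in enumerate(samples)]

N = 8
tx = [cmath.exp(2j * math.pi * k / N) for k in range(N)]   # one complex tone
rx = apply_cfo(tx, 0.25, N)              # offset of a quarter subcarrier spacing
rec = compensate_cfo(rx, 0.25, N)        # perfect estimate recovers the signal
assert all(abs(a - b) < 1e-9 for a, b in zip(rec, tx))
```

A residual (uncompensated) epsilon leaves each subcarrier leaking into its neighbours after the FFT, which is the ICI that the ICA based equalization of the third contribution absorbs implicitly.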

12. Optimization Problems in Partial Differential Equations

Author: Yichen LIU, 2015
Abstract: The primary objective of this research is to investigate various optimization problems connected with partial differential equations (PDE). In Chapter 2, we utilize the tool of tangent cones from convex analysis to prove the existence and uniqueness of a minimization problem. Since the admissible set considered in Chapter 2 is a suitable convex set in $L^\infty(D)$, we can make use of tangent cones to derive the optimality condition for the problem. However, if we let the admissible set be a rearrangement class generated by a general function (not a characteristic function), the method of tangent cones may not apply. The central part of this research is Chapter 3, which builds on foundational work developed mainly by Geoffrey R. Burton and his collaborators in the 1990s; see [7, 8, 9, 10]. Usually, we consider a rearrangement class (a set comprising all rearrangements of a prescribed function) and then optimize some energy functional related to partial differential equations over this class or part of it; we call such a problem a rearrangement optimization problem (ROP). In recent years this area of research has become increasingly popular amongst mathematicians, for several reasons. One reason is that many physical phenomena can be naturally formulated as ROPs. Another is that ROPs have natural links with other branches of mathematics, such as geometry, free boundary problems, convex analysis, and differential equations. Lastly, such optimization problems also offer very challenging questions that are fascinating for researchers; see for example [2]. More specifically, Chapter 2 and Chapter 3 are based on four papers [24, 40, 41, 42], mainly in collaboration with Behrouz Emamizadeh. Chapter 4 is inspired by [5]. In [5], the existence and uniqueness of solutions of various PDEs involving Radon measures are presented.
In order to establish a connection between rearrangements and PDEs involving Radon measures, the author try to investigate a way to extend the notion of rearrangement of functions to rearrangement of Radon measures in Chapter 4.
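For reference, the rearrangement class and the associated optimization problem described above can be stated as follows (the notation is the standard one from the rearrangement literature, not taken verbatim from the thesis):

```latex
% Rearrangement class generated by a prescribed function f_0 on a bounded domain D:
% all functions whose super-level sets have the same measure as those of f_0
\mathcal{R}(f_0) = \left\{ f : D \to \mathbb{R} \;\middle|\;
  \bigl|\{x \in D : f(x) \ge \alpha\}\bigr|
  = \bigl|\{x \in D : f_0(x) \ge \alpha\}\bigr| \quad \forall \alpha \in \mathbb{R} \right\}

% A rearrangement optimization problem (ROP): extremize an energy functional
% \Phi attached to a PDE over the rearrangement class
\inf_{f \in \mathcal{R}(f_0)} \Phi(f)
\qquad \text{or} \qquad
\sup_{f \in \mathcal{R}(f_0)} \Phi(f)
```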

13.Variational Inequalities and Optimization Problems

Author:Yina LIU 2015
Abstract:The primary objective of this research is to investigate various optimization problems connected with partial differential equations (PDEs). In Chapter 2, we utilize the tool of tangent cones from convex analysis to prove the existence and uniqueness of a minimization problem. Since the admissible set considered in Chapter 2 is a suitable convex set in $L^\infty(D)$, we can make use of tangent cones to derive the optimality condition for the problem. However, if we let the admissible set be a rearrangement class generated by a general function (not a characteristic function), the method of tangent cones may no longer apply. The central part of this research is Chapter 3, which builds on the foundational work developed mainly by Geoffrey R. Burton and his collaborators in the 1990s, see [7, 8, 9, 10]. Usually, we consider a rearrangement class (a set comprising all rearrangements of a prescribed function) and then optimize some energy functional related to partial differential equations over this class or part of it; such a problem is called a rearrangement optimization problem (ROP). In recent years this area of research has become increasingly popular amongst mathematicians for several reasons. One reason is that many physical phenomena can be naturally formulated as ROPs. Another is that ROPs have natural links with other branches of mathematics such as geometry, free boundary problems, convex analysis, differential equations, and more. Lastly, such optimization problems also offer very challenging questions that are fascinating for researchers, see for example [2]. More specifically, Chapter 2 and Chapter 3 are based on four papers [24, 40, 41, 42], mainly in collaboration with Behrouz Emamizadeh. Chapter 4 is inspired by [5], in which the existence and uniqueness of solutions of various PDEs involving Radon measures are presented.
In order to establish a connection between rearrangements and PDEs involving Radon measures, the author investigates in Chapter 4 a way to extend the notion of rearrangement of functions to rearrangement of Radon measures.

14.Theoretical and Numerical study on Optimal Mortgage Refinancing Strategy

Author:Jin ZHENG 2015
Abstract:This work studies the optimal refinancing strategy for debtors from the perspective of balancing profit and risk, where the strategy is formulated as a utility optimization problem involving the expectation and variance of the discounted profit from refinancing. An explicit solution is given when the dynamics of the interest rate follow an affine model with a closed-form zero-coupon bond price. The results provide a reference for debtors dealing with refinancing, by predicting the future value of the contract. Special cases are considered in which the interest rates are deterministic functions. Our formulation is robust and applicable to all short-rate stochastic processes satisfying the affine model.
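The abstract does not name a specific short-rate model, so as an illustration of the affine class it refers to, here is a minimal sketch of the closed-form zero-coupon bond price under the Vasicek model, one standard member of that class (the parameter names are my own):

```python
import numpy as np

def vasicek_zcb_price(r0, tau, a, theta, sigma):
    """Zero-coupon bond price P(0, tau) under the Vasicek short-rate model
    dr = a*(theta - r) dt + sigma dW, an affine term-structure model:
    P = exp(A(tau) - B(tau) * r0).
    """
    B = (1.0 - np.exp(-a * tau)) / a
    A = (theta - sigma**2 / (2 * a**2)) * (B - tau) - sigma**2 * B**2 / (4 * a)
    return np.exp(A - B * r0)
```

A refinancing decision rule of the kind described above would compare such model-implied future contract values against the cost of the existing mortgage.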

15.Temperature-based Weather Derivatives Modeling and Contract Design in Mainland China

Author:Lu ZONG 2015
Abstract:In this thesis, we build the theoretical framework for the development of a temperature-based weather derivatives market in China. Our research is divided into two separate studies due to their different scopes. In the first study, we focus on determining the most precise model for temperature-based weather derivative modeling and pricing in China. To achieve this objective, a heuristic comparison of the new stochastic seasonal variation (SSV) model with three established empirical temperature and pricing models, i.e. the Alaton model [1], the CAR model [2] and the Spline model [3], is conducted. Comparison criteria include residual normality, the residual auto-correlation function (ACF), the Akaike information criterion (AIC), relative errors, and stability of price behaviors. The results show that the SSV model dominates the other three models by providing both a more precise fitting of the temperature process and more stable price behaviors. In the second study, novel forms of temperature indices are proposed and analyzed both at the city level and the climatic zone level, with the aim of providing a contract-selecting scheme that increases risk management efficiency in the agricultural sector of China. Performances of the newly introduced indices are investigated via an efficiency test which considers the root mean square loss (RMSL), the value at risk (VaR) and the certainty-equivalent revenues (CERs). According to the results, agricultural risk management on the city scale can be optimized by using the absolute-deviation growth degree-day (GDD) index. On the other hand, it is suggested that climatic zone-based contracts can be more efficient compared with city-based contracts. The recommended contract-selection scheme is to purchase climatic zone-based average GDD contracts in climatic zone II, and to purchase climatic zone-based optimal-weighted GDD contracts in climatic zone I or III.
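The GDD indices above are built on the standard degree-day accumulation; a minimal sketch of that base computation is given below (the 10 °C base temperature is a common agricultural convention, not a value taken from the thesis, whose index forms are novel variants of this quantity):

```python
def growing_degree_days(t_max, t_min, t_base=10.0):
    """Standard degree-day (GDD) index: daily mean temperature above a
    base temperature, accumulated over the contract period.

    t_max, t_min : sequences of daily maximum / minimum temperatures (deg C)
    t_base       : base temperature below which no growth is credited
    """
    return sum(max(0.0, (hi + lo) / 2.0 - t_base)
               for hi, lo in zip(t_max, t_min))
```

A temperature-based contract then pays out as a function of this accumulated index over the coverage period.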

16.A Corpus-based Register Analysis of Corporate Blogs-text types and linguistic features

Author:Yang WU 2016
Abstract:A main theme in sociolinguistics is register variation, i.e. situation- and use-dependent variation of language. Numerous studies have provided evidence of linguistic variation across situations of use in English. However, very little attention has been paid to the language of corporate blogs (CBs), which is often seen as an emerging genre of computer-mediated communication (CMC). Previous studies on blogs and corporate blogs have provided important information about their linguistic features as well as functions; however, our understanding of the linguistic variation in corporate blogs remains limited in particular ways, because many of these previous studies have focused on individual linguistic features, rather than on how features interact and what the possible relations between forms (linguistic features) and functions are. Given these limitations, a more systematic perspective on linguistic variation in corporate blogs is necessary. In order to study register variation in corporate blogs more systematically, a combined framework rooted in Systemic Functional Linguistics (SFL) and register theories (e.g., Biber, 1988, 1995; Halliday & Hasan, 1989) is adopted. This combination is based on some common grounds they share, which concern the functional view of language, co-occurrence patterns of linguistic features, and the importance of large corpora to linguistic research. Guided by this framework, this thesis aims to: 1) investigate the functional linguistic variation in corporate blogs, identify the text types that are distinguished linguistically, and show how the CB text types cut across CB industry categories; and 2) identify salient linguistic differences across text types in corporate blogs in the configuration of the three components of the context of situation: field, tenor, and mode of discourse.
In order to achieve these goals, a 590,520-word corpus consisting of 1,020 textual posts from 41 top-ranked corporate blogs is created and mapped onto the combined framework which consists of Biber’s multi-dimensional (MD) approach and Halliday’s SFL. Accordingly, two sets of empirical analyses are conducted one after another in this research project. At first, by using a corpus-based MD approach which applies multivariate statistical techniques (including factor analysis and cluster analysis) to the investigation of register variation, CB text types are identified; and then, some linguistic features, including the most common verbs and their process types, personal pronouns, modals, lexical density, and grammatical complexity, are selected from language metafunctions of mode, tenor and field within the SFL framework, and their linguistic differences across different text types are analysed. The results of these analyses not only show that the corporate blog is a hybrid genre, representing a combination of various text types, which serve to achieve different communicative purposes and functional goals, but also exhibit a close relationship between certain text types and particular industries, which means the CB texts categorized into a certain text type are mainly from a particular industry. On this basis, the lexical and grammatical features (i.e., the most common verbs, pronouns, modal verbs, lexical density and grammatical complexity) associated with Halliday’s metafunctions are further explored and compared across six text types. It is found that language features which are related to field, tenor and mode in corporate blogs demonstrate a dynamic nature: centring on an interpersonal function, the online blogs in a business setting are basically used for the purposes of sales, customer relationship management and branding. 
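Of the linguistic features listed above, lexical density (the share of content words among all words) is the most mechanical to compute. The sketch below uses a small hand-picked function-word list as a stand-in for proper part-of-speech tagging, purely for illustration; the thesis's actual feature extraction is not reproduced here.

```python
# Illustrative function-word list; a real analysis would use a POS tagger.
FUNCTION_WORDS = {
    "a", "an", "the", "and", "or", "but", "if", "of", "to", "in", "on",
    "at", "by", "for", "with", "is", "are", "was", "were", "be", "been",
    "it", "this", "that", "we", "you", "they", "he", "she", "not", "as",
}

def lexical_density(text):
    """Approximate lexical density: content (non-function) words / all words."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    content = [w for w in words if w not in FUNCTION_WORDS]
    return len(content) / len(words)
```

Computed per post and averaged per text type, a measure like this is one of the inputs the multivariate analyses above operate on.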
This research project contributes to the existing field of knowledge in the following ways: Firstly, it develops the methodology used in corpus investigation of language variation, and paves the way for further research into corporate blogs and other forms of electronic communication and, more generally, for researchers engaging in corpus-based investigations of other language varieties. Secondly, it adds greatly to the description of the corporate blog as a language variety in its own right, covering the different text types identified in CB discourse and some linguistic features realized in the context of situation. This highlights the fact that corporate blogs cannot be regarded as a single, uniform discourse; rather, they vary according to text type and context of situation.

17.Molecular ecological characterization of a honey bee ectoparasitic mite, Tropilaelaps mercedesae

Author:Xiaofeng DONG 2016
Abstract:Tropilaelaps mercedesae (small mite) is one of two major honey bee ectoparasitic mite species responsible for the colony losses of Apis mellifera in Asia. Although T. mercedesae mites are still restricted to Asia (except Japan), they may spread all over the world through the ever-increasing global trade in live honey bees, as Varroa destructor did. Understanding the ecological characteristics of T. mercedesae at the molecular level could potentially help improve management and control programs. However, the molecular and genomic characterization of T. mercedesae remains poorly studied, and to date no genes had been deposited in GenBank. Therefore, I conducted T. mercedesae genome and transcriptome sequencing. By comparing the T. mercedesae genome with those of other arthropods, I have gained new insights into the evolution of Parasitiformes and the evolutionary changes associated with the specific habitats and life history of honey bee ectoparasitic mites, which could potentially improve the control programs for T. mercedesae. Finally, the characterization of the T. mercedesae transient receptor potential channel, subfamily A, member 1 (TmTRPA1) would also help us develop a novel control method for T. mercedesae.

18.Public Participation in the Urban Regeneration Process - A comparative study between China and the UK

Author:Lei SUN 2016
Abstract:The primary aim of this research is to explore how urban regeneration policies and practices are shaped by larger social, political and economic structures in China and the UK respectively, and how the individual agents involved in the regeneration process formulate their strategies, take their actions, and at the same time use discourses to legitimize those actions. It further probes the lessons that each country could learn from the other's successes or failures in implementing regeneration initiatives. The thesis adopts a cross-national comparative strategy and draws intensively on Variegated Neoliberalism, Neoliberal Urbanism and Critical Urban theory in developing its theoretical framework. The comparison was conducted at three levels. At the national level, the evolution of urban regeneration and public participation policies and practices in both countries is compared; at the city level, neoliberal urban policies and their impacts on the development of two selected cities, Liverpool in the UK and Xi'an in China, are compared; at the micro level, the major players' interactions and the discourses they used to underpin their actions in two selected case studies, the Kensington Regeneration in Liverpool and the Drum Tower Muslim District in Xi'an, are examined and compared. In carrying out the study, the literature on the transformation of urban policies in the two countries and detailed information relating to the two selected cities and case studies are reviewed, and around 35 semi-structured interviews were conducted. The research results demonstrate the suitability of Variegated Neoliberalism in explaining how the process of neoliberalization in both China and the UK is affected by non-market elements.
It is found that the stage of economic development, the degree of decentralization, the character of politics and the degree of state intervention in the economy have played a significant role in shaping the distinctive features of urban regeneration policies in the two countries. In spite of these differences, similar trends towards neoliberalization can be found in the evolution of urban regeneration policies and practices in both countries: the elimination of public housing and low-rent accommodation, the creation of opportunities for speculative investment in real estate markets, official discourses of urban disorder, and 'entrepreneurial' discourses and representations focused on urban revitalization and reinvestment all play significant roles in the formation and implementation of regeneration policies. Moreover, similar tactics are used by municipal governments in both countries to overcome resistance from local residents. It is also found that the discourses used by the municipal governments in describing the regeneration projects are heavily influenced by Neoliberal Urbanism, in marked contrast to those used by local residents, which intensively reference concepts from Critical Urban theory. It is suggested that the Chinese government could learn from its British counterpart's experience of introducing partnerships to deliver urban regeneration programs, and at the same time learn how to use formal venues to resolve conflicts arising from physical regeneration programs. For the British government, lessons could be learnt from China's successful experiences in decentralization and the empowerment of municipalities.

19.Vision-based Driver Behaviour Analysis

Author:Chao YAN 2016
Abstract:With ever-growing traffic density, the number of road accidents is anticipated to increase further. Finding solutions to reduce road accidents and to improve traffic safety has become a top priority for many government agencies and automobile manufacturers alike. It has become imperative to develop Advanced Driver Assistance Systems (ADAS) that are able to continuously monitor not just the surrounding environment and vehicle state, but also driver behaviours. Dangerous driver behaviour, including distraction and fatigue, has long been recognized as a main contributing factor in traffic accidents. This thesis presents contributing research on vision-based driver distraction and fatigue analysis and pedestrian gait identification, which can be summarised in four parts as follows. First, driver distraction activities, including operating the shift lever, talking on a cell phone, eating, and smoking, are recognised under the framework of human action recognition. Computer vision technologies, including the motion history image and the pyramid histogram of oriented gradients, are applied to extract discriminative features for recognition. Moreover, a hierarchical classification system, which considers different sets of features at different levels, is designed to improve performance over conventional "flat" classification. Second, to address the problem of effectiveness under poor illumination and realistic road conditions and to improve performance, a posture-based driver distraction recognition system is extended, which applies a convolutional neural network (CNN) to automatically learn and predict pre-defined driving postures. The main idea is to monitor driver arm patterns, with discriminative information extracted to predict distracting driver postures.
Third, to analyse driver fatigue and distraction through the driver's eyes, mouth and ears, a commercial deep-learning facial landmark locating toolbox (Face++ Research Toolkit) is evaluated for localizing the regions of the driver's eyes, mouth and ears, and demonstrates robust performance under illumination variation and occlusion in real driving conditions. Then, semantic features for recognising the different statuses of the eyes, mouth and ears on image patches are learned via CNNs, which requires minimal domain knowledge of the problem.
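The motion history image used in the first part has a well-known recurrence: pixels where the inter-frame difference exceeds a threshold are stamped with the maximal duration value, and all other pixels decay, so recent motion appears bright while older motion fades. A minimal sketch of one update step (the threshold and duration parameters here are illustrative, not the thesis's settings):

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=15, delta=30):
    """One update step of a motion history image (MHI).

    mhi        : current MHI (integer array, same shape as the frames)
    prev_frame : previous grayscale frame (uint8)
    frame      : current grayscale frame (uint8)
    tau        : duration value stamped on moving pixels
    delta      : inter-frame difference threshold for detecting motion
    """
    # widen dtype before subtracting to avoid uint8 wrap-around
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > delta
    return np.where(motion, tau, np.maximum(mhi - 1, 0))
```

Stacking such images over a clip yields the temporal template from which features like the pyramid histogram of oriented gradients are then extracted.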

20.Development of Low Cost CdS/CdTe Thin Film Solar Cells by Using Novel Materials

Author:Jingjin WU 2016
Abstract:Cadmium telluride (CdTe) thin film solar cells are one of the most promising solar cell technologies and hold 5% of the photovoltaics market. CdTe thin film solar cells are expected to play a crucial role in the future photovoltaics market. The limitations on terawatt-scale deployment of CdTe solar cells are the scarcity of raw materials, low power conversion efficiency, and stability. During the last few decades, intensive studies have been made to further understand the material properties, explore substitute materials, and gain insight into defect generation and distribution in solar cells. Yet these problems are still not fully resolved. One significant topic is the replacement of indium tin oxide (ITO). Following the introduction of aluminium-doped zinc oxide (ZnO:Al or AZO) into thin film solar cell applications, zinc oxide based transparent conducting oxides have attracted attention from academic research institutes and industry. Zinc oxides are commonly doped with group III elements such as aluminium and gallium; some researchers have introduced group IV elements, including titanium, hafnium and zirconium, and obtained good properties. In our work, we deposited zirconium-doped zinc oxide (ZnO:Zr or ZrZO) by atomic layer deposition (ALD). Owing to ALD's precise control of the chemical ratio, the nature of ZrZO could be revealed. It is found that the ZrZO thin film has good thermal stability, and that with increasing zirconium concentration the energy bandgap of the ZrZO film widens, following the Burstein–Moss effect. Another issue for CdTe solar cells is the doping of the CdTe thin films: the low carrier concentration in CdTe thin films limits the open-circuit voltage and thus the power conversion efficiency. Copper is a compelling element used as a CdTe dopant; however, a high concentration of copper ions results in severe solar cell degradation. One approach was to evaporate a few nanometres of copper on the CdTe thin film followed by annealing.
Another approach was to introduce a buffer layer between the CdTe thin film and the back metallic electrode. Numerous works have shown that an Sb2Te3 layer performs better than copper-based buffer layers, while carbon-based buffer layers, such as graphene and single-wall carbon nanotubes, offer excellent stability and permeability.
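For reference, the Burstein–Moss widening of the apparent optical bandgap with carrier concentration mentioned above is commonly written as (the symbols are the standard ones, not taken from the thesis):

```latex
% Burstein--Moss shift: apparent bandgap widening due to conduction-band filling,
% with n the carrier concentration and m^* the reduced effective mass
\Delta E_{\mathrm{BM}} = \frac{\hbar^{2}}{2 m^{*}} \left( 3 \pi^{2} n \right)^{2/3}
```

The $n^{2/3}$ dependence is why increasing the zirconium doping level, which raises the free-carrier concentration, widens the measured bandgap of the ZrZO films.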