Find Theses


41. Growth, Dielectric Properties, and Reliability of High-k Thin Films Grown on Si and Ge Substrates

Author: Qifeng Lu, 2018
Abstract: With the continuous downscaling of Metal Oxide Semiconductor Field Effect Transistors (MOSFETs), silicon (Si) based MOS devices have reached their limits. To further decrease the minimum feature size of devices, high-k materials (with dielectric constants larger than the 3.9 of silicon dioxide (SiO2)) have been employed to replace the SiO2 gate dielectric. However, high-k dielectrics contain higher densities of traps than the nearly trap-free SiO2, so it is important to comprehensively investigate the defects and the electron trapping/de-trapping properties of these oxides. In addition, germanium (Ge) has emerged as a promising channel material for high-speed metal-oxide-semiconductor (MOS) devices, mainly due to its higher carrier mobility compared with silicon. However, the poor interface quality between the Ge substrate and gate dielectrics makes it difficult to fabricate high-performance germanium-based devices, so an effective passivation method for the germanium substrate is a critical issue to be addressed before high-quality Ge MOSFETs can be fabricated. To solve the above problems, this research studied high-k materials and the passivation of germanium substrates. In the first part of this work, lanthanide zirconium oxides (LaZrOx) were deposited on Si substrates using atomic layer deposition (ALD). The pulse capacitance-voltage (CV) technique, which completes a CV sweep within several hundred microseconds, was employed to investigate oxide traps in the LaZrOx. The results indicate that: (1) more traps are observed in the LaZrOx than with the conventional CV characterization method; (2) the time-dependent trapping/de-trapping is influenced by the edge times, pulse widths, and peak-to-peak voltages (VPP) of the applied gate voltage pulses.
An anomalous behavior was also observed in the pulse CV curves, in which the relative positions of the forward and reverse CV traces are opposite to those obtained from conventional measurements. A model based on interface dipoles formed at the high-k/SiOx interface is proposed to explain this behavior; the dipoles arise from the difference in oxygen atom density between the high-k materials and the native oxides. In addition, a hump appears in the forward pulse CV traces, which is explained by the displacement current due to the pn junction formed between the substrate and the inversion layer during the pulse CV measurement. Secondly, hafnium titanate oxides (TixHf1-xO2) with different concentrations of titanium oxide were deposited on p-type germanium substrates by ALD. X-ray Photoelectron Spectroscopy (XPS) was used to analyze the interface quality and chemical structure, and the current-voltage (IV) and capacitance-voltage (CV) characteristics were measured using an Agilent B1500A semiconductor analyzer. The results indicate that GeOx and germanate are formed at the high-k/Ge interface and that the interface quality deteriorates severely. An increased leakage current is also obtained when the HfO2 content in the TixHf1-xO2 is increased. The relatively large leakage current density (~10^-3 A/cm^2) is partially attributed to the deterioration of the Ge/TixHf1-xO2 interface caused by HfO2 acting as an oxidation source; the small band gap of TiO2 also contributes to the observed leakage current. The CV characteristics show almost no hysteresis between the forward and reverse traces, indicating a low trap density in the oxide. Since deterioration of the interface quality was observed, an in-situ ZnO interfacial layer was deposited in the ALD system to passivate the germanium substrate. However, a larger distortion of the as-deposited sample was observed.
Although post-deposition annealing (PDA) has a positive effect on the CV curves, both the frequency dispersion and the leakage current increase after PDA; the ZnO interfacial layer is therefore not an effective passivation layer for the germanium substrate. In addition, GeO is formed by the reaction, and GeO desorption from the gate oxide/Ge interface occurs, which also degrades the device performance. In the final part of this work, to circumvent the problems explored above, a 0.1 mol/L propanethiol solution in 2-propanol, a 0.1 mol/L octanethiol solution in 2-propanol, and a 20% (NH4)2S solution in DI water were used to passivate n-type germanium substrates before HfO2 dielectric thin films were deposited by ALD. The results show an increase in the dielectric constant and a reduction in leakage current for the chemically treated samples. The sample passivated by the octanethiol solution has the largest dielectric constant, while the lowest leakage current density is observed for the sample passivated by the (NH4)2S solution, followed by the one passivated by octanethiol. In addition, the effects of a TiN cap layer on the formation and suppression of GeO were investigated. It was found that the formation of GeO, and its desorption from the gate oxide/Ge interface, are suppressed by the cap layer. As a result, an increase in dielectric constant from 8.2 to 13.5 and a lower leakage current density under negative applied voltage are obtained. Therefore, passivation of the substrates by octanethiol or (NH4)2S solutions, followed by a TiN cap layer, is a useful technique for Ge-based devices.
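Dielectric constants like the 8.2 and 13.5 quoted above are typically extracted from the accumulation capacitance of a MOS capacitor via the parallel-plate relation C = k·ε0·A/t. A minimal sketch of that extraction; the device area, oxide thickness, and measured capacitance below are purely hypothetical, not values from the thesis:

```python
# Illustrative extraction of the dielectric constant (k) of a gate oxide from
# the accumulation capacitance of a MOS capacitor: C = k * eps0 * A / t.
# All device numbers below are hypothetical.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_constant(c_acc, area, thickness):
    """k from accumulation capacitance (F), electrode area (m^2), oxide thickness (m)."""
    return c_acc * thickness / (EPS0 * area)

# Hypothetical 100 um x 100 um capacitor with a 10 nm oxide measuring 120 pF:
k = dielectric_constant(120e-12, (100e-6) ** 2, 10e-9)
print(round(k, 1))
```

In practice, series resistance and any interfacial low-k layer would also have to be corrected for before quoting k.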

43. Dynamics of Learners' Emergent Motivational Disposition: The Case of EAP Learners at a Transnational English-Medium University

Author: Austin Cody Pack, 2021
Abstract: This thesis aims to better understand the processes affecting the motivational dynamics of English for Academic Purposes (EAP) learners at a transnational education (TNE) university that uses English as its medium of instruction (EMI). It joins the ongoing discussion of how to leverage Complex Dynamic Systems Theory (CDST) to understand second language (L2) motivation and takes a special interest in understanding what demotivates students to study EAP. It employed a mixed methodology and a two-stage research design to explore how EAP learners' motivation changed over the course of a semester in their first year, as well as what the salient demotivating and motivating factors were for these students. First, motivation journals, motivation questionnaires, semi-structured interviews, and focus group discussions were used to investigate how and why the motivation levels of 60 first-year EAP students changed over a period of 10 weeks. Salient demotivating factors identified from the data were then further explored by means of a demotivation questionnaire administered to the larger student population (n=1517) in order to understand how frequently these factors were a source of demotivation. Learners' motivational disposition was found to be complex and multifaceted, changing frequently between motivated and demotivated states. Motivation constructs (e.g., L2 self-guides, instrumentality) frequently used in previous L2 motivation studies did not sufficiently account for the day-to-day changes in students' motivational disposition. Instead, motivational disposition, or students' willingness to expend effort to learn at any given moment, was found to emerge from the complex and non-linear interaction of a multitude of factors internal and external to the language learner and the language classroom. These factors exerted influences of different strengths on motivational disposition according to changes in time and context.
Sources of demotivation were frequently associated with factors outside the EAP classroom, whereas sources of motivation were frequently associated with factors inside it. The study is significant for both theory and research methodology relating to L2 motivation. First, while CDST has been used as a metaphor for understanding the dynamics of motivation, the current study provides evidence that characteristics of CDSs can be grounded in actual data (e.g., the emergent nature of motivation, sensitivity to initial conditions). Second, based on these findings, this thesis presents a new CDST-informed model of language learning motivation. Third, it suggests that it is necessary to move away from a binary way of thinking that categorizes motivational factors into a dichotomy of motivating/demotivating; a more complex and fluid understanding of motivational factors is needed. Lastly, it highlights the need for frequent sampling that minimizes the time between when students recollect motivating/demotivating experiences and when those experiences actually occurred.

44. Multiview Video View Synthesis and Quality Enhancement using Convolutional Neural Networks

Author: Samer Jammal, 2020
Abstract: Multiview videos, recorded from different viewpoints by multiple synchronized cameras, provide an immersive perception of 3D scenes and a more realistic 3D viewing experience. However, they impose an enormous load on the acquisition, storage, compression, and transmission of video data. Consequently, new and advanced 3D video technologies for the efficient representation and transmission of multiview data are important for the success of multiview applications. This thesis develops various methods aimed at improving multiview video coding efficiency, with convolutional neural networks as their core engine. The thesis includes two novel methods for accurate disparity estimation from stereo images. It proposes the use of convolutional neural networks with multi-scale correlation for disparity estimation. This method exploits the dependency between two feature maps by combining the benefits of a small correlation scale for fine details with a large scale for larger areas. Nevertheless, rendering accurate disparity maps for foreground and background objects with fine details in real scenarios is a challenging task, so a framework with a three-stage strategy is proposed for the generation of high-quality disparity maps for both near and far objects. Furthermore, current techniques for multiview data representation, even when they exploit inter-view correlation, require large storage or transmission bandwidth, which grows almost linearly with the number of transmitted views. To address this problem, we propose a novel view synthesis method for multiview video systems in which the intermediate views are represented solely by their edges while their texture content is dropped. The texture content is then synthesized with a convolutional neural network by matching and exploiting the edges and other information in the central view.
Experimental results verify the effectiveness of the proposed framework. Finally, highly compressed multiview videos suffer severe quality degradation, so it is necessary to enhance the visual quality of highly compressed views at the decoder side. Consequently, a novel method for multiview quality enhancement is proposed that directly learns an end-to-end mapping between the low-quality and high-quality views and recovers the details of the low-quality view.
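For intuition, correlation-based disparity estimation can be reduced to classic block matching: for each pixel, search over candidate shifts and keep the one with the lowest matching cost. A toy 1-D sum-of-absolute-differences (SAD) version, standing in for the learned multi-scale correlation described above (the data and window size are illustrative):

```python
import numpy as np

# Toy winner-takes-all disparity by sum-of-absolute-differences (SAD) block
# matching along a scanline -- a much simplified stand-in for the learned
# multi-scale correlation in the thesis.

def disparity_sad(left, right, max_disp, win=1):
    """Per-pixel disparity for 1-D signals: left[x] matches right[x - d]."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(win, n - win):
        best, best_d = float("inf"), 0
        for d in range(min(max_disp, x - win) + 1):
            cost = np.abs(left[x - win:x + win + 1] -
                          right[x - d - win:x - d + win + 1]).sum()
            if cost < best:
                best, best_d = cost, d
        disp[x] = best_d
    return disp

left = np.array([0, 0, 9, 1, 7, 3, 0, 0, 0, 0], dtype=float)
right = np.roll(left, -2)          # scene shifted by 2 px: true disparity is 2
print(disparity_sad(left, right, max_disp=4)[3:7])   # [2 2 2 2]
```

A real matcher would use 2-D windows, sub-pixel refinement, and left-right consistency checks; the learned approach replaces the hand-crafted SAD cost with correlation of CNN feature maps.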

45. Dual-functional Carbon-based Interlayers towards High-performance Li-S Batteries

Author: Ruowei Yi, 2021
Abstract: To reduce carbon emissions and alleviate pollution, fossil-fuel combustion engines are gradually being replaced with new energy devices, and secondary batteries with high energy storage have become a popular alternative power source due to their zero emissions during operation. In recent years, the lithium-ion battery, the most popular energy storage device in the battery market for mobile devices, has been declining in the power battery field: its energy density (~150 Wh kg^-1) can no longer meet the demands of power equipment, and current research has almost reached the theoretical capacity of lithium-ion electrode materials, leaving little room for improvement. Academic research has therefore begun to seek new battery systems to meet the needs of the industry. As a system based on the non-topological reaction between a lithium anode and a sulfur cathode, the lithium-sulfur battery has a very high theoretical energy density (2567 Wh kg^-1) and theoretical specific capacity (1672 mAh g^-1), sufficient to meet the energy density requirements (500-600 Wh kg^-1) of power batteries. Meanwhile, sulfur is low-cost and environmentally friendly, which suits large-scale commercialization. The lithium-sulfur battery is therefore considered a strong competitor for the next generation of power supplies. However, a series of shortcomings currently limits its large-scale application, for example the sluggish reaction kinetics of active sulfur and the degraded cycling stability caused by the shuttle effect. Improving both would ameliorate the rate performance and cycle stability of the lithium-sulfur battery, which are crucial to the practical application of power batteries. In this thesis, in order to solve the above problems, the author first used a facile and scalable method to prepare a carbon black/PEDOT:PSS coating.
The modified separator was applied to a lithium-sulfur battery as an improved interlayer for the cathode, and the principle by which the interlayer improves the sulfur cathode was studied by electrochemical analysis. The high conductivity and polysulfide adsorption ability of the coating deliver an initial specific capacity of 1315 mAh g^-1 at 0.2 C and 699 mAh g^-1 at a high rate of 2 C. Secondly, to reduce the density of the cathode interlayer, a three-dimensional graphene foam was chosen as the conductive substrate of the interlayer and modified with zinc oxide by atomic layer deposition (ALD), creating a self-standing three-dimensional graphene foam/nano zinc oxide interlayer. This interlayer leads to an initial specific capacity of 1051 mAh g^-1 at a 0.5 C rate, and its low areal density (0.15 mg cm^-2) also reduces its impact on the energy density of the cathode. As a step forward, two-dimensional Ti3C2Tx nanosheets (MXene), with high conductivity and polysulfide adsorption characteristics, were selected as an alternative to zinc oxide for modifying the graphene foam (GFMX), which simplifies the synthesis process and enhances the electronic conductivity of the interlayer. With the GFMX interlayer, the lithium-sulfur batteries still maintain a specific capacity of 867 mAh g^-1 after 120 cycles at 0.2 C, and 755 mAh g^-1 at a high rate of 2 C. In light of the significant improvement brought by the MXene interlayer, MXene modified by the in-situ growth of nitrogen- and nickel-doped carbon nanosheets was then studied. Results show that the stacking of MXene is greatly reduced and the specific surface area of the material is increased; moreover, the adsorption capacity for polysulfides is largely improved by the nitrogen doping.
When the obtained composite material is used as the separator coating, the lithium-sulfur batteries exhibit a specific capacity of 943 mAh g^-1 after 100 cycles at 0.2 C and 588 mAh g^-1 after 500 cycles at 1 C. The average per-cycle capacity decay rate is 0.069%, and the specific capacity of a high-sulfur-loading cathode (3.8 mg cm^-2) is 946 mAh g^-1, highlighting its potential application in high-performance lithium-sulfur batteries.
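The average per-cycle capacity decay rate quoted above is conventionally computed from the initial and final capacities over the cycle count. A one-line sketch with hypothetical capacities, not the thesis' data:

```python
# Average per-cycle capacity decay rate, as commonly reported for Li-S cells:
# decay (%) = (C_initial - C_final) / (C_initial * n_cycles) * 100.
# The capacities below are hypothetical, for illustration only.

def avg_decay_rate(c_initial, c_final, cycles):
    """Average capacity loss per cycle, as a percentage of initial capacity."""
    return (c_initial - c_final) / (c_initial * cycles) * 100.0

# e.g. a cell fading from 1000 to 655 mAh/g over 500 cycles:
print(round(avg_decay_rate(1000.0, 655.0, 500), 3))  # 0.069 %/cycle
```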

46. Global Motion Compensation Using Motion Sensor to Enhance Video Coding Efficiency

Author: Fei Cheng, 2018
Abstract: Throughout the current development of video coding technologies, the main improvements have come from increasing the number of possible prediction directions and adding more sizes and modes for block coding; there have been no major substantial changes in the underlying technology. Conventional video coding algorithms work well for video with motion parallel to the image plane, but their efficiency drops for other kinds of motion, such as dolly motion. Yet an increasing number of videos are captured by moving cameras, as video devices become more diversified and lighter, so more efficient video coding tools are needed to compress video for new video technologies. In this thesis, a novel video coding tool, Global Motion Estimation using Motion Sensor (GMEMS), is proposed, and a series of related approaches is investigated and evaluated. The main goal of this tool is to use advanced motion sensor technology and computer graphics techniques to improve and extend traditional motion estimation and compensation, and thereby enhance video coding efficiency. At the same time, the computational complexity of motion estimation is reduced, since some differences have already been compensated. Firstly, a Motion information based Coding method for Texture sequences (MCT) is proposed and evaluated using the H.264/AVC standard. In this method, a motion sensor of the kind commonly used in smartphones is employed to capture the panning (rotational) motion. The proposed method compensates for panning motion through frame projection based on the camera motion and a new reference allocation method. The experimental results demonstrate an average video coding gain of around 0.3 dB. In order to apply this method to other types of motion for texture videos, the distance of scene objects from the camera surface, i.e. the depth map, has to be used according to the image projection principle.
Generally, a depth map contains fewer details than texture, especially at low resolution. Therefore, a Motion information based Coding scheme using Frame-Skipping for Depth map sequences (MCFSD) is proposed. The experimental results show that this scheme is effective for low-resolution depth map sequences, enhancing performance by around 2.0 dB. The idea of motion-information-assisted coding is finally applied to both texture and depth map sequences for different types of motion: a Motion information based Texture plus Depth map Coding (MTDC) scheme is proposed for 3D videos. This scheme is applied to H.264/AVC and the newer H.265/HEVC video coding standard and tested at VGA and HD resolutions. The results show that the proposed scheme improves performance under all conditions. For VGA resolution under H.264/AVC, the average gain is about 2.0 dB; since H.265/HEVC already enhances encoding efficiency, the average gain for HD resolution under H.265/HEVC drops to around 0.4 dB. Another contribution of this thesis is a combined software and hardware experimental data acquisition method. The proposed motion-information-based video coding schemes require video sequences with accurate camera motion information, but proper datasets are difficult to find. Therefore, an embedded-hardware-based data acquisition platform was designed to obtain real-scene video sequences, while a CG-based method was used to produce HD video sequences with accurate depth maps.
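The dB gains quoted above are differences in PSNR (peak signal-to-noise ratio) between codecs at matched bitrates. A minimal PSNR sketch on a toy 8-bit block (the pixel values are illustrative):

```python
import numpy as np

# PSNR in dB: 10 * log10(peak^2 / MSE) between a reference and a degraded
# image. This is the metric behind the coding gains quoted above.

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[52, 55], [61, 59]], dtype=np.uint8)
deg = np.array([[50, 55], [61, 60]], dtype=np.uint8)   # MSE = 1.25
print(round(psnr(ref, deg), 2))
```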

47. An Integrated Life Cycle Assessment and System Dynamics Model for Evaluating Carbon Emissions from Construction and Demolition Waste Management of Building Refurbishment Projects

Author: Wenting Ma, 2022
Abstract: Since the building sector accounts for more than one third of global carbon emissions, it is imperative that the sector mitigate its emissions to help reach the COP26 climate conference goal of achieving global net zero by mid-century. Building refurbishment (BR) is key to reducing carbon emissions in the building sector because it reduces the operational energy consumption of existing buildings instead of demolishing them and building new ones. China is a good example of a country encouraging refurbishment, having prioritized BR in its 14th Five-Year Building Energy Efficiency and Green Building Development Plan (2021-2025). Since the number of BR projects in China is therefore likely to increase significantly in the coming years, it is important to evaluate the carbon emissions associated with construction and demolition (C&D) waste to find optimal waste management solutions. However, no studies have considered the carbon emissions of C&D waste management of BR projects from a whole life cycle perspective. This study fills the research gap by developing a novel LCA-SD model, which integrates the features of life cycle assessment (LCA) and system dynamics (SD) to evaluate the carbon emissions of C&D waste management of BR projects through non-linear and dynamic analysis from a whole life cycle perspective. Variables for evaluating the carbon emissions were first identified in four life cycle stages of C&D waste management of BR projects. Causal loop diagrams were then developed to demonstrate the interrelations of the variables in the different life cycle stages, and the novel LCA-SD stock and flow model was formulated based on these diagrams. The model was validated through a case study of a typical BR project in China, and the validated model was used to compare and analyze waste management scenarios for the case study project by simulating selected scenarios.
The simulation results reveal that the secondary material utilization rate is the most effective independent variable for reducing carbon emissions from C&D waste management of the case BR project: 11.28% of total carbon emissions could be avoided by using 31% secondary materials to substitute natural raw materials; improving the combustible waste incineration rate to 100% could reduce total carbon emissions by 6.42%; halving the on-site waste rate could reduce them by 1.28%; while improving the inert waste recycling rate to 90% could only reduce them by 1%. From the whole life cycle perspective, the refurbishment material stage accounts for the highest carbon emissions, followed by the refurbishment material end-of-life (EOL) stage and the dismantlement stage, with the refurbishment construction stage accounting for the least. The findings not only highlight the importance of cradle-to-cradle life cycle C&D waste management for mitigating carbon emissions from BR projects, but also demonstrate the effectiveness of the novel integrated LCA-SD model as an "experimental laboratory" in which BR C&D waste management decision makers can conduct "what-if" dynamic simulation analyses of various scenarios before embarking on a project.
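The scenario logic can be pictured as a minimal "what-if" calculation: total emissions are summed over life cycle stages, and a scenario scales the material-stage emissions by the secondary-material substitution rate. The stage values and the avoided-burden factor below are hypothetical, not the model's calibrated parameters:

```python
# Minimal "what-if" sketch in the spirit of the LCA-SD model: total emissions
# are a sum over life cycle stages, and a scenario offsets material-stage
# emissions in proportion to the secondary-material substitution rate.
# All stage values and the avoided-burden factor are hypothetical.

baseline = {                 # tCO2e per life cycle stage (hypothetical)
    "material": 600.0,
    "construction": 50.0,
    "dismantlement": 120.0,
    "material_EOL": 230.0,
}

def scenario_total(stages, secondary_rate=0.0, avoided_factor=0.6):
    """Secondary materials displace a share of material-stage emissions."""
    return sum(stages.values()) - stages["material"] * secondary_rate * avoided_factor

base = scenario_total(baseline)
scen = scenario_total(baseline, secondary_rate=0.31)   # 31% secondary materials
print(round(100.0 * (base - scen) / base, 2))          # % emissions reduction
```

The real model adds feedback loops and time dynamics (stocks and flows); this static sketch only shows how a single scenario lever propagates to the total.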

48. Optimization on the Electrical Performance of the Solution-processed Zinc Tin Oxide Thin-film Transistors and its Application Research for Artificial Synapses

Author: Tianshi Zhao, 2022
Abstract: Thin-film transistors (TFTs), the core components of active-matrix liquid crystal displays (AMLCDs) and active-matrix organic light-emitting diode (AMOLED) displays, have been intensively researched all over the world. Over the past decades, to meet display requirements of high resolution, large screen size, and low power consumption, metal oxide (MO) semiconductors have been proposed and widely investigated for the fabrication of high-performance TFTs. Compared with traditional TFTs based on amorphous silicon (α-Si) technology, MO-based TFTs (MOTFTs) are reported to have much higher electron field-effect mobility (μFE) due to their large, spherical ns-orbitals (n≥4). MO semiconductor materials are also advantageous for transparent applications due to their wide bandgap (~3 eV). Therefore, wide-bandgap MO semiconductors, including indium (In), gallium (Ga), zinc (Zn), and tin (Sn) based binary or multi-component oxides, have gradually become promising channel material candidates for advanced TFT-based technologies. However, for the well-established vacuum-based MO fabrication technologies, such as magnetron sputtering, atomic layer deposition (ALD), and chemical vapor deposition (CVD), the complex processes, demanding equipment, and small deposition areas heavily limit the prospects for low-cost MO deposition. The solution process, a feasible and facile route to deposit MO films under ambient conditions, has therefore been proposed and reported. Nevertheless, there is often a trade-off between the low cost of solution methods and the high performance of the resulting TFTs: solvent residues or an incomplete annealing process may introduce defects and degrade the performance of the devices.
Alongside these challenges, many studies have reported ways to reduce the side effects of the solution process, and plenty of breakthroughs have been achieved. In other words, fabricating TFTs with high electrical performance by low-cost solution processes still has great room for development and is worthy of study. In this work, for environmental protection and cost reduction, we mainly focus on the spin-coated n-type, In-free semiconductor zinc tin oxide (ZnSnO, ZTO). We first proposed a deionized (DI) water solvent-based fabrication route for ZTO semiconductor films, carried out at low temperature (≤300 °C) in air. Combined with a silicon dioxide (SiO2) dielectric layer, TFTs with a μFE of 2 cm^2 V^-1 s^-1 were successfully fabricated. Furthermore, with the help of the novel two-dimensional (2D) material MXene, we tuned the work function (WF) of the ZTO channel and optimized both the μFE (13.06 cm^2 V^-1 s^-1) and the gate bias (GB) stability of the TFTs by depositing homojunction-structured channels. Subsequently, we replaced the SiO2 dielectric with solution-processed high-k aluminum oxide (AlOx) films; the devices showed an increased μFE of 28.35 cm^2 V^-1 s^-1 and were successfully applied to a resistor-load inverter (Chapter 2). Secondly, beyond performance optimization, solution-processed TFTs can also be applied to advanced, highly parallel neuromorphic network computing tasks. TFTs that meet this application requirement are regarded as synaptic transistors (STs) and are designed to mimic the biological synapse. Their operation is based on the hysteresis window in the STs' transfer characteristics and on non-volatile, multi-level variable channel conductance.
Here we applied MXene to the interface between the ZTO channel and the SiO2 dielectric layer and proposed a kind of floating-gate transistor (FGT) with the functions of an ST. The MXene-induced FGTs (MXFGTs) successfully mimicked the typical behaviors of a biological synapse under both gate voltage (VGS) and channel-incident ultraviolet (UV) light stimuli. To further explore the suitability of the MXFGTs for machine learning tasks, we used an artificial neural network (ANN) classifier together with the measured device results to simulate an image classification process. The training and recognition results on images from the Modified National Institute of Standards and Technology (MNIST) database further proved the application potential of MXFGTs in neural network (NN) systems (Chapter 3). Finally, in Chapter 4, we further improved the light-detecting behavior of the MXene-based STs. A shell layer of germanium oxide (GeOx) was grown to cover the MXene nanosheets through a facile solution method. The obtained GeOx-coated MXene (GMX) nanosheets were doped into the ZTO channel layer and fabricated into GMX-based STs (GMXSTs). Owing to the area-enlarging effect of the high-electron-density MXene core and the heterostructure of the GeOx/ZTO bilayer, the GMXSTs showed excellent optoelectrical synaptic performance under visible light stimuli, greatly improved over the MXFGTs. We then applied the devices' varied responses to different input lights in image target-area detection simulations. With the help of this detection pre-processing, the task of counting fluorescent cells stained with 2-(4-Amidinophenyl)-6-indolecarbamidine dihydrochloride (DAPI) was correctly performed. Finally, "night vision"-inspired and brightness-adjusted image reconstruction results were presented, further indicating the bright future of this kind of synaptic device in the field of artificial visual perception.
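The μFE values cited are typically extracted from the saturation-regime transfer curve, where Id = (W·Ci/2L)·μFE·(Vg − Vth)² and the slope of √Id versus Vg yields μFE = 2L·slope²/(W·Ci). A sketch on synthetic data; the device geometry and dielectric capacitance below are illustrative, not the thesis' values:

```python
import numpy as np

# Saturation-regime field-effect mobility extraction from a TFT transfer
# curve: Id = (W*Ci/(2L)) * mu * (Vg - Vth)^2, so the slope of sqrt(Id)
# vs Vg gives mu = 2L * slope^2 / (W * Ci). Synthetic data, illustrative only.

W, L = 1000e-6, 100e-6          # channel width / length (m)
CI = 3.45e-4                    # areal capacitance, F/m^2 (~100 nm SiO2)
MU_TRUE, VTH = 2.0e-4, 2.0      # 2 cm^2/Vs expressed in m^2/Vs; Vth in V

vg = np.linspace(5.0, 20.0, 16)
i_d = (W * CI / (2 * L)) * MU_TRUE * (vg - VTH) ** 2   # ideal transfer curve

slope = np.polyfit(vg, np.sqrt(i_d), 1)[0]             # d(sqrt(Id))/dVg
mu_cm2 = 2 * L * slope ** 2 / (W * CI) * 1e4           # back to cm^2/Vs
print(round(mu_cm2, 2))
```

On real data the fit would be restricted to the linear portion of √Id above threshold, since contact resistance and subthreshold behavior distort the ends of the curve.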

49. Depth Assisted Background Modeling and Super-resolution of Depth Map

Author: Boyuan Sun, 2018
Abstract: Background modeling, which detects foreground objects in images, is one of the fundamental tasks in computer vision. It is used in many applications such as object tracking, traffic analysis, scene understanding, and other video applications. The easiest way to model the background is to obtain a background image that does not include any moving objects. However, in some environments the background may not be available, and it can be changed by surrounding conditions such as illumination changes (a light switched on/off), objects removed from the scene, or objects with a constant moving pattern (waving trees). Robustness and adaptation of the background model are therefore essential. Mixture of Gaussians (MOG) is one of the most widely used methods for background modeling using color information, whereas the depth map provides an additional dimension of information that is independent of color. In this thesis, color-only methods such as Gaussian Mixture Models (GMM), Hidden Markov Models (HMM), and Kernel Density Estimation (KDE) are first thoroughly reviewed. Then an algorithm that jointly uses color and depth information is proposed, which uses a MOG and a single Gaussian model (SGM) to represent recent observations of color and depth respectively. A color-depth consistency check mechanism is also incorporated into the algorithm to improve the accuracy of the extracted background. The spatial resolution of depth images captured by consumer depth cameras is generally limited by the element size of the sensor. To overcome this limitation, depth image super-resolution is proposed, which infers the high-frequency components to obtain a high-resolution depth image from a low-resolution one. Deep convolutional neural networks have been used successfully, with remarkable performance, in various computer vision tasks such as image segmentation, classification, and recognition.
Recently, the residual network configuration has been proposed to further improve performance. Inspired by residual networks, we redesign the popular deep model Super-Resolution Convolutional Neural Network (SRCNN) for depth image super-resolution. Based on the ideas of the residual network and the SRCNN structure, we propose three neural-network-based approaches to depth image super-resolution. In these approaches, we introduce a deconvolution layer into the network, which enables learning directly from the original low-resolution image to the desired high-resolution image, instead of interpolating the image with a conventional method such as bicubic before it enters the network. Then, to minimize the sharpness loss near boundary regions, we add layers at the end of the network to learn the residuals.
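The single Gaussian model (SGM) used above for the depth channel can be sketched as a per-pixel running mean and variance, with pixels that fall far from the model flagged as foreground. A simplified version (learning rate, initial variance, and threshold are illustrative parameters, not the thesis' settings):

```python
import numpy as np

# Per-pixel single-Gaussian running background model (a simplified SGM):
# mean/variance are updated with rate alpha on background pixels only, and
# pixels beyond k standard deviations from the mean are flagged foreground.

class RunningGaussianBG:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mu = first_frame.astype(float)
        self.var = np.full_like(self.mu, 25.0)   # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        fg = np.abs(frame - self.mu) > self.k * np.sqrt(self.var)
        d = frame - self.mu
        # update background statistics only where the pixel fits the model
        self.mu[~fg] += self.alpha * d[~fg]
        self.var[~fg] += self.alpha * (d[~fg] ** 2 - self.var[~fg])
        return fg

frames = [np.full((4, 4), 100.0) for _ in range(10)]   # static depth scene
bg = RunningGaussianBG(frames[0])
for f in frames:
    bg.apply(f)
moving = frames[0].copy(); moving[1:3, 1:3] = 200.0    # an object appears
print(int(bg.apply(moving).sum()))                     # 4 foreground pixels
```

The thesis' method pairs this SGM on depth with a MOG on color and cross-checks the two masks; the sketch shows only the depth half.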

50. Clothing-based Interfaces for Multimodal Interactions

Author: Vijayakumar Nanjappan, 2020
Abstract: Textiles are a vital and indispensable part of the clothing we use daily. They are very flexible, often lightweight, and have a variety of uses. Today, with rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Advances in fabric sensing technology allow us to combine multiple interface modalities. However, most textile-based research takes a unimodal approach, and current input options have limitations in the available gesture types, as well as issues such as low social acceptance when interactions are performed in public or in front of unfamiliar people. As an alternative, wrist-based gesture input has the extra benefit of supporting eyes-free interactions, which are subtle and thus socially acceptable. In this research, we propose and develop two fabric-based multimodal interfaces (FABMMIs) which support wrist gestures, touch gestures, and combinations of the two. To that end, we first investigated the acceptance and performance of wrist-based multimodal input using FABMMIs for (1) in-vehicle controls and (2) handheld augmented reality (HAR) devices. Through a first user-elicitation study with 18 users, we devised a taxonomy of wrist and touch gestures for in-vehicle interactions using a wrist-worn FABMMI in a simulated driving setup. From an analysis of 864 gestures, we derived an in-vehicle gesture set of 10 unique gestures that represented 56% of the user-preferred gestures. In a second user-elicitation study, we investigated the use of a fabric-based wearable device as an alternative interface for HAR applications, and present results on users' gesture preferences for a hand-worn FABMMI by analysing 2,673 gestures from 33 participants across 27 HAR tasks.
Our gesture set includes a total of 13 user-preferred gestures which are socially acceptable and comfortable to use with HAR devices, and we also derived an interaction vocabulary of wrist and thumb-to-index touch gestures. To achieve the above input possibilities, we developed strain sensors to capture wrist movements and pressure sensors to detect touch inputs. Our sensors are graphene-modified polyester (PE) fabric and polyurethane (PU) foam, respectively; the high graphene loading and its good adhesion to both the PE fabric and the PU foam enhance the sensitivity and lifetime of the sensors. Using our in-house developed sensors, we built two prototypes: (1) WRISMMi, a wrist-worn interface for in-vehicle interactions, and (2) HARMMi, a hand-worn device for HAR interactions. A linear regression model is used to set global thresholds for the different bending and pressing magnitude levels. We tested the suitability and performance of our prototypes on a set of interactions extrapolated from the two user-elicitation studies. Our results suggest that FABMMIs are viable for supporting a variety of natural, eyes-free, and unobtrusive interactions in multitasking situations.
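The thresholding step described above can be sketched as follows: fit a line mapping calibrated bend angles to raw sensor readings, then invert it to classify an incoming reading into magnitude levels. The calibration data and level boundaries below are invented for illustration; the thesis does not publish the actual values.

```python
# Hypothetical sketch: linear regression maps sensor readings to bending
# magnitude levels. Calibration numbers and boundaries are made up.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: bend angle (degrees) vs. raw sensor reading.
angles  = [0, 30, 60, 90]
reading = [0.0, 1.5, 3.1, 4.4]
a, b = fit_line(angles, reading)

def level(r, a, b, bounds=(22.5, 67.5)):
    """Classify a reading into low/mid/high via the fitted line's inverse."""
    angle = (r - b) / a
    return sum(angle > t for t in bounds)  # 0 = low, 1 = mid, 2 = high

print(level(3.0, a, b))  # 1 (mid-level bend)
```

A global threshold of this kind trades per-user calibration effort for slightly coarser classification, which matches the prototypes' goal of working out of the box.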

51.Learning Semantic Segmentation with Weak and Few-shot Supervision

Author:Bingfeng Zhang 2022
Abstract:Semantic segmentation, which aims to make dense pixel-level classifications, is a core problem in computer vision. Given sufficient and accurate pixel-level annotated training data, semantic segmentation has witnessed great progress with recent advances in deep neural networks. However, such pixel-level annotation is time-consuming and relies heavily on human effort, and segmentation performance drops dramatically on unseen classes or when annotated data is insufficient. To overcome these drawbacks, many researchers focus on learning semantic segmentation with weak and few-shot supervision, i.e., weakly supervised semantic segmentation and few-shot segmentation. Specifically, weakly supervised semantic segmentation aims to make pixel-level classifications with weak annotations (e.g., bounding-box, scribble, and image-level labels) as supervision, while few-shot segmentation attempts to segment unseen object classes with a few annotated samples. In this thesis, we mainly focus on image-label supervised, bounding-box supervised, and scribble supervised semantic segmentation, as well as few-shot segmentation. For weakly supervised semantic segmentation with image-level annotation, current approaches mainly adopt a two-step solution, which first generates pseudo pixel masks that are then fed into a separate semantic segmentation network. However, these two-step solutions usually employ many bells and whistles to produce high-quality pseudo masks, making this kind of method complicated and inelegant. We instead harness the image-level labels to produce reliable pixel-level annotations and design a fully end-to-end network that learns to predict segmentation maps. Concretely, we first leverage an image classification branch to generate class activation maps for the annotated categories, which are further pruned into small reliable object/background regions.
Such reliable regions then serve directly as ground-truth labels for the segmentation branch, where both a global-information and a local-information sub-branch are used to generate accurate pixel-level predictions. Furthermore, a new joint loss is proposed that considers both shallow and high-level features. For weakly supervised semantic segmentation with bounding-box annotation, most existing approaches rely on a deep convolutional neural network (CNN) to generate pseudo labels by propagating initial seeds. However, CNN-based approaches only aggregate local features, ignoring long-distance information. We propose a graph neural network (GNN)-based architecture that takes full advantage of both local and long-distance information. We first transfer the weak supervision to initial labels, which are then formed into semantic graphs based on our newly proposed affinity CNN. The built graphs are input to our GNN, in which an affinity attention layer is designed to acquire short- and long-distance information from soft graph edges and accurately propagate semantic labels from the confident seeds to the unlabeled pixels. However, to guarantee the precision of the seeds, we only adopt a limited number of confident pixel seed labels, which may lead to insufficient supervision for training. To alleviate this issue, we further introduce a new loss function and a consistency-checking mechanism that leverage the bounding-box constraint, so that more reliable guidance can be included for model optimization. More importantly, our approach can be readily applied to bounding-box supervised instance segmentation or other weakly supervised semantic segmentation tasks, showing great potential as a unified framework for weakly supervised semantic segmentation.
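The core idea of propagating semantic labels from confident seeds to unlabeled pixels over soft graph edges can be sketched with a plain label-propagation loop. The toy graph, fixed edge weights, and seed labels below are illustrative only; the thesis replaces fixed weights with a learned affinity attention layer.

```python
# Illustrative label propagation over a weighted pixel graph.
# Seeds keep their labels; other nodes accumulate weighted neighbour scores.

def propagate(edges, seeds, n_nodes, n_iters=20):
    """edges: {(i, j): weight}; seeds: {node: class}. Returns per-node scores."""
    n_cls = max(seeds.values()) + 1
    score = [[0.0] * n_cls for _ in range(n_nodes)]
    for node, cls in seeds.items():
        score[node][cls] = 1.0
    for _ in range(n_iters):
        new = [row[:] for row in score]
        for (i, j), w in edges.items():
            for c in range(n_cls):
                new[j][c] += w * score[i][c]
                new[i][c] += w * score[j][c]
        # Re-clamp seeds and normalise each node's scores.
        for node, cls in seeds.items():
            new[node] = [0.0] * n_cls
            new[node][cls] = 1.0
        for n in range(n_nodes):
            s = sum(new[n]) or 1.0
            new[n] = [v / s for v in new[n]]
        score = new
    return score

# 4 pixels in a chain: node 0 (seed: class 0) - 1 - 2 - node 3 (seed: class 1);
# the weak 0.2 edge separates the two objects.
edges = {(0, 1): 1.0, (1, 2): 0.2, (2, 3): 1.0}
seeds = {0: 0, 3: 1}
score = propagate(edges, seeds, 4)
labels = [row.index(max(row)) for row in score]
print(labels)  # each unlabeled pixel follows its strongly connected seed
```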
For weakly supervised semantic segmentation with scribble annotation, regularized losses have proven to be an effective solution. However, most existing regularized losses only leverage static shallow features (color and spatial information) to compute the regularized kernel, which limits final performance, since such static shallow features fail to describe pairwise pixel relationships in complicated cases. We propose a new regularized loss that utilizes both shallow and deep features, dynamically updated in order to aggregate sufficient information to represent the relationships between different pixels. Moreover, in order to provide accurate deep features, we adopt a vision transformer as the backbone and design a feature consistency head to train the pairwise feature relationship. Unlike most approaches that adopt a multi-stage training strategy with many bells and whistles, our approach can be trained directly in an end-to-end manner, in which the feature consistency head and our regularized loss benefit from each other. For few-shot segmentation, most existing approaches use masked Global Average Pooling (GAP) to encode an annotated support image into a feature vector to facilitate query image segmentation. However, this pipeline unavoidably loses some discriminative information due to the averaging operation. We propose a simple but effective self-guided learning approach in which the lost critical information is mined. Specifically, by making an initial prediction for the annotated support image, the covered and uncovered foreground regions are encoded into primary and auxiliary support vectors, respectively, using masked GAP. By aggregating both primary and auxiliary support vectors, better segmentation performance is obtained on query images.
Building on our self-guided module for 1-shot segmentation, we propose a cross-guided module for multi-shot segmentation, where the final mask is fused from the predictions of multiple annotated samples, with high-quality support vectors contributing more and low-quality ones contributing less. This module improves the final prediction at inference time without re-training.
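Masked GAP, the operation the self-guided module refines, can be sketched directly: feature vectors are averaged only over the pixels a binary mask selects, and splitting the ground-truth mask into covered and uncovered regions (relative to an initial prediction) yields the primary and auxiliary support vectors. The tiny feature map and masks below are toy values for illustration.

```python
# Masked Global Average Pooling over a tiny 2x2, 2-channel feature map.
# All numbers are illustrative.

def masked_gap(feat, mask):
    """Average C-dim feature vectors over positions where mask == 1."""
    chans = len(feat[0][0])
    total = [0.0] * chans
    count = 0
    for frow, mrow in zip(feat, mask):
        for f, m in zip(frow, mrow):
            if m:
                total = [t + v for t, v in zip(total, f)]
                count += 1
    return [t / count for t in total] if count else total

feat = [[[1.0, 0.0], [2.0, 9.0]],
        [[5.0, 4.0], [7.0, 6.0]]]
fg_mask   = [[1, 1], [1, 0]]   # annotated foreground
pred_mask = [[1, 0], [1, 0]]   # initial prediction of the module
covered = [[a & b for a, b in zip(r1, r2)] for r1, r2 in zip(fg_mask, pred_mask)]
missed  = [[a & (1 - b) for a, b in zip(r1, r2)] for r1, r2 in zip(fg_mask, pred_mask)]

primary   = masked_gap(feat, covered)  # from correctly predicted foreground
auxiliary = masked_gap(feat, missed)   # from the lost (uncovered) foreground
print(primary, auxiliary)
```

The auxiliary vector is exactly the information plain masked GAP would have averaged away, which is why aggregating both vectors recovers discriminative detail.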

52.Person Re-identification and Tracking in Video Surveillance

Author:Yanchun Xie 2020
Abstract:Video surveillance is one of the most essential topics in the computer vision field. With the rapid and continuous increase in the use of video surveillance cameras to obtain portrait information from scenes, it has become a very important system for security and criminal investigations. Video surveillance systems involve many key technologies, including object recognition, object localization, object re-identification, and object tracking, by which the system can identify and follow the movements of objects and persons. In recent years, person re-identification and visual object tracking have become hot research directions in computer vision. A re-identification system aims to recognize and identify a target with the required attributes, while a tracking system follows and predicts the movement of the target after the identification process. Researchers have used deep learning and computer vision technologies to significantly improve the performance of person re-identification. However, person re-identification remains challenging due to complex application environments, such as lighting variations, complex background transformations, low-resolution images, occlusions, and similar dress among different pedestrians. The challenge of this task also comes from the lack of available bounding boxes for pedestrians and the need to search for a person over the whole set of gallery images. To address these critical issues in modern person identification applications, we propose an algorithm that can accurately localize persons by learning to minimize intra-person feature variations. We build our model upon a state-of-the-art object detection framework, i.e., Faster R-CNN, so that high-quality region proposals for pedestrians can be produced in an online manner.
In addition, to relieve the negative effects caused by varying visual appearances of the same individual, we introduce a novel center loss that increases the intra-class compactness of feature representations. The center loss encourages persons with the same identity to have similar feature characteristics. Beyond the localization of a single person, we explore the more general visual object tracking problem. The main task of visual object tracking is to predict the location and size of a tracking target accurately and reliably in subsequent image sequences, given the target at the beginning of the sequence. A visual object tracking algorithm with high accuracy, good stability, and fast inference speed is necessary. In this thesis, we study the model-updating problem for two kinds of mainstream tracking algorithms and improve their robustness and accuracy. First, we extend the siamese tracker with a model-updating mechanism to improve its tracking robustness. A siamese tracker uses a deep convolutional neural network to obtain features and compares the new frame's features with the target features from the first frame. The candidate region with the highest similarity score is taken as the tracking result. However, such trackers are not robust against large target variations due to their no-update matching strategy over the whole tracking process. To combat this defect, we propose an ensemble siamese tracker, in which the final similarity score is also affected by the similarity with tracking results in recent frames, instead of solely considering the first frame. Tracking results in recent frames are used to adjust the model to continuous target change. Meanwhile, we combine an adaptive candidate sampling strategy with a large-displacement optical flow method to further improve performance.
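The center loss mentioned above can be sketched as the mean squared distance between each feature vector and its identity's running centre. The feature values, labels, and centres below are toy numbers; in the thesis the features come from the detection backbone and the centres are updated during training.

```python
# Illustrative center loss: penalises features that stray from their
# identity's centre, encouraging intra-class compactness. Toy values only.

def center_loss(features, labels, centers):
    """0.5 * mean over samples of squared L2 distance to the class centre."""
    total = 0.0
    for f, y in zip(features, labels):
        total += 0.5 * sum((fi - ci) ** 2 for fi, ci in zip(f, centers[y]))
    return total / len(features)

features = [[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]]
labels   = [0, 0, 1]
centers  = {0: [0.9, 0.1], 1: [0.0, 1.0]}
print(center_loss(features, labels, centers))
```

Minimising this term alongside the usual classification loss pulls same-identity features together without changing the inter-class separation objective.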
Second, we investigate the classic correlation-filter-based tracking algorithm and propose a better model selection strategy based on reinforcement learning. The correlation filter has proven to be a useful tool in a number of approaches to visual tracking, particularly for seeking a good balance between tracking accuracy and speed. However, correlation-filter-based models are susceptible to wrong updates stemming from inaccurate tracking results. To date, little effort has been devoted to handling the correlation filter update problem. In our approach, we update and maintain multiple correlation filter models in parallel and use deep reinforcement learning to select the optimal correlation filter model among them. To make the decision process efficient, we propose a decision-net to deal with target appearance modeling, trained over hundreds of challenging videos using proximal policy optimization and a lightweight learning network. An exhaustive evaluation of the proposed approach on the OTB100 and OTB2013 benchmarks shows its effectiveness.

53.Determinants of Asymmetric Cost Behavior

Author:Yuxin Shan 2019
Abstract:Asymmetric cost behavior describes a non-linear association between changes in costs and changes in sales. This dissertation consists of three papers identifying different determinants of asymmetric cost behavior. Drawing on the economic theory of sticky costs, institutional theory, and the "grabbing hand" theory, the first paper identifies three factors in China that increase the level of cost stickiness: state ownership, the five-year government plan, and the density of skilled labor. Using data from 34 OECD countries, the second paper provides empirical evidence that companies in high-tax-rate jurisdictions are more likely to have a greater level of cost stickiness than companies in low-tax-rate jurisdictions. Using U.S. data, the third paper explores the association between high-quality information technology (IT) and the level of cost stickiness. Consistent with my expectation, empirical results show that high-quality IT weakens asymmetric cost behavior. In addition, the third paper investigates the relationship between high-quality IT and audit efficiency, showing that high-quality IT enhances audit quality and decreases audit fees. This study contributes to the cost accounting literature by suggesting additional determinants affecting managers' resource adjustment decisions. Meanwhile, it sheds light on the tax avoidance literature by providing empirical evidence of the effect of country-level statutory tax rates on "real" corporate decisions. It also contributes to the extant literature on returns to IT investments, showing that the quality of IT affects managers' resource adjustment decisions and audit efficiency. Additionally, this study provides guidance for policymakers on how managers react to changes in government plans and regulations.
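Asymmetric cost behavior ("cost stickiness") is conventionally quantified in this literature with an Anderson-Banker-Janakiraman-style log-change regression, in which costs respond less to a sales decrease than to an equivalent increase. The elasticities below are made-up numbers for illustration, not estimates from the dissertation.

```python
# Numeric illustration of cost stickiness: with hypothetical coefficients,
# a 10% sales rise moves costs more than a matching sales fall does.
import math

def log_cost_change(log_sales_change, beta1=0.8, beta2=-0.3):
    """beta1: response to sales growth; beta1 + beta2: response to decline."""
    decrease = 1.0 if log_sales_change < 0 else 0.0
    return beta1 * log_sales_change + beta2 * decrease * log_sales_change

up   = log_cost_change(math.log(1.10))     # sales up 10%
down = log_cost_change(math.log(1 / 1.10)) # sales down by the same log amount
print(up, down)  # costs fall by less than they rise: that gap is stickiness
```

A more negative beta2 (the interaction coefficient on the decrease dummy) corresponds to greater stickiness, which is the quantity the three papers relate to state ownership, tax rates, and IT quality.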

54.BIOAVAILABILITY-BASED ENVIRONMENTAL RISK ASSESSMENT OF THE IMPACTS FROM METAL TOXICANTS AND NUTRIENTS IN LAKE TAI

Author:Xiaokai Zhang 2020
Abstract: Environmental pollution has increasingly become a global issue in recent years. Heavy metals are among the most prevalent pollutants and are persistent environmental contaminants, since they cannot be degraded or destroyed. Environmental risk assessment (ERA) paves the way for streamlined environmental impact assessment and environmental management of heavy metal contamination. Bioavailability is increasingly used as an indicator of risk (the exposure to pollutants), and for this reason whole-cell biosensors, or bioreporters, and speciation modelling have both become of increasing interest for determining the bioavailability of pollutants. While there is great emphasis on metals as toxicants in the environment, some metals also serve as micronutrients. The same processes that introduce metals as pollutants into the environment also introduce metals that may, in some cases, function as micronutrients, which then play a role in eutrophication, i.e. excessive nutrient richness that impairs many freshwater ecosystems and is a prominent cause of harmful algal blooms. In this thesis, I cover a wide range of topics. A unifying theme is the biological impacts of metals in the environment and their implications for environmental risk assessment. The thesis begins with my initial work, in which I conducted laboratory experiments using a bioreporter, genetically engineered bacteria that produce dose-dependent signals in response to target chemicals, to test the bioavailability of lead (Pb) in aqueous systems containing Pb-complexing ligands. Lead serves as a good model because of its global prevalence and toxicity. The studied ligands include ethylenediaminetetraacetic acid (EDTA), meso-2,3-dimercaptosuccinic acid (DMSA), leucine (Leu), methionine (Met), cysteine (Cys), glutathione (GSH), and humic acid (HA).
The results showed that EDTA, DMSA, Cys, GSH, and HA amendment significantly reduced Pb bioavailability to the bioreporter with increasing ligand concentration, whereas Leu and Met had no notable effect on bioavailability at the concentrations tested. Natural water samples from Lake Tai (Taihu) were also studied, showing that dissolved organic carbon in Taihu water significantly reduced Pb bioavailability. Meanwhile, the bioreporter results accord with the reduction in aqueous Pb2+ that I expected from the relative complexation affinities of the different ligands tested. These findings represent a first step toward using bioreporter technology to streamline an approach to ERA. Dissolved organic matter (DOM) plays an important role in both speciation modelling and the bioavailability of heavy metals. Due to the variation of DOM properties in natural aquatic systems, improvements to the existing standard one-size-fits-all approach to modelling metal-DOM interactions are needed for ERA. My next effort was to investigate variations in DOM and Pb-DOM binding across the regional expanse of Taihu. The results show that different DOM components are highly variable across different regions of Taihu, and bivariate and multivariate analyses confirm that water quality and DOM characterisation parameters are strongly interrelated. I find that the conditional stability constant of Pb-DOM binding is strongly affected by the water's chemical properties and the composition of DOM, though it is not itself a parameter that differentiates lake water properties in different regions of the lake. The variability of DOM composition and Pb-DOM binding strength across Taihu is consistent with prior findings that a one-size-fits-all approach to metal-DOM binding may lead to inaccuracies in commonly used speciation models, and therefore such generalised approaches need improvement for regional-level ERA in complex watersheds.
Based on the findings from the investigation of Pb-DOM complexation, I compared a one-size-fits-all approach with different methods of implementing site-specific variations in modelling, and was able to substantively improve the procedures of the existing speciation model commonly used in ERA applications. The results showed that the optimised model agrees much more accurately with bioreporter-measured bioavailable Pb. The streamlined approach to ERA that I developed performed well in a first regional-scale freshwater demonstration. There is a close connection between environmental water and sediment contamination, and I also studied Pb bioavailability in lake sediment with a focus on the ramifications for environmental risk. For this work, I studied sediment samples from Brothers Water lake in the United Kingdom, a much simpler lake system than Taihu that is severely impacted by centuries of Pb mining in the immediate vicinity. The results showed that the total concentration of Pb in the sediment has an inverse relationship with bioavailable Pb in the test samples, a positive relationship with sediment particle size and sand content, and a negative relationship with clay content. I find that the relative amount of bioavailable Pb in the lake sediments is low, although surface sediments may have much higher bioavailable Pb than deeper sediments. To address the effects of metals and other micronutrients on algal growth, I performed small-scale mesocosm nutrient-limitation bioassays using boron (B), iron (Fe), cobalt (Co), copper (Cu), molybdenum (Mo), nitrogen (N) and phosphorus (P) on phytoplankton communities sampled from different locations in Taihu to test the relative effects of micronutrients on in situ algal assemblages. I found a number of statistically significant effects of micronutrient stimulation on growth or shifts in algal assemblage. The most notable finding concerned copper and is, to my knowledge, unique in the literature.
However, I am unable to rule out a homeostatic link between copper and iron. The results from my study concur with a small and emerging body of literature suggesting that the potential role of micronutrients in harmful algal blooms and eutrophication requires further consideration in ERA and environmental management. The findings from this work are not only of interest to academics but represent feasible approaches with which environmental practitioners may evaluate risk. My work on Pb needs further validation, which could come through impact assessment studies, and is therefore directly and immediately extensible to environmental risk. I am therefore hopeful that my work on ERA will drive tangible outcomes in environmental management. Likewise, though my work on the effect of micronutrients on algal growth is more fundamental than applied at present, there are important and immediate implications for environmental management: at present, copper is used as an algicide. My work suggests that the long-term effect of copper at 20 μg·L-1 could possibly encourage rather than inhibit harmful algal blooms. It is satisfying to arrive at a scientifically interesting and at the same time practically useful outcome from my years of work; however, I hope that this and other similar work on risk and management interventions can inspire a shift to pollution prevention rather than "end of pipe" solutions.
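The expected reduction of free Pb2+ by a complexing ligand, which underlies both the bioreporter experiments and the speciation modelling above, can be sketched from a 1:1 equilibrium Pb + L <-> PbL with conditional stability constant K = [PbL] / ([Pb][L]). The stability constants and concentrations below are illustrative, not measured values from the thesis.

```python
# Free [Pb2+] from total Pb, total ligand, and a 1:1 stability constant.
# Mass balance gives a quadratic in x = [Pb]free:
#   K*x^2 + (1 + K*(L_t - Pb_t))*x - Pb_t = 0
import math

def free_pb(pb_total, l_total, K):
    """Positive root of the 1:1 complexation mass-balance quadratic."""
    a = K
    b = 1 + K * (l_total - pb_total)
    c = -pb_total
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

pb_t = 1e-6                   # 1 uM total Pb (illustrative)
for K in (1e3, 1e6, 1e9):     # weak -> strong ligand (illustrative constants)
    print(free_pb(pb_t, 2e-6, K))
```

As K grows, the free (bioavailable) fraction collapses, which is the dose-dependent signal reduction the bioreporter assays are interpreted against.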

55.Learning and Leveraging Structured Knowledge from User-Generated Social Media Data

Author:Hang Dong 2020
Abstract:Knowledge has long been a crucial element in Artificial Intelligence (AI), tracing back to the knowledge-based systems, or expert systems, of the 1960s. Knowledge provides context to facilitate machine understanding and improves the explainability and performance of many semantic-based applications. The acquisition of knowledge is, however, a complex step, normally requiring much effort and time from domain experts. In machine learning, one key domain of AI, the learning and leveraging of structured knowledge, such as ontologies and knowledge graphs, have become popular in recent years with the advent of massive user-generated social media data. The main hypothesis of this thesis is therefore that a substantial amount of useful knowledge can be derived from user-generated social media data. A popular, common type of social media data is social tagging data, accumulated from users' tagging on social media platforms. Social tagging data exhibit unstructured characteristics, including noisiness, flatness, sparsity, and incompleteness, which prevent their efficient knowledge discovery and usage. The aim of this thesis is thus to learn useful structured knowledge from social media data despite these unstructured characteristics. Several research questions have been formulated around this hypothesis and the research challenges. A knowledge-centred view is taken throughout this thesis: knowledge bridges the gap between massive user-generated data and semantic-based applications. The study first reviews concepts related to structured knowledge, then focuses on two main parts: learning structured knowledge and leveraging structured knowledge from social tagging data. To learn structured knowledge, a machine learning system is proposed to predict subsumption relations from social tags.
The main idea is to learn to predict accurate relations with features generated by probabilistic topic modelling and founded on a formal set of assumptions for deriving subsumption relations. Tag concept hierarchies can then be organised to enrich existing Knowledge Bases (KBs), such as DBpedia and the ACM Computing Classification System. The study presents relation-level evaluation, ontology-level evaluation, and a novel Knowledge Base Enrichment based evaluation, and shows that the proposed approach can generate high-quality, meaningful hierarchies to enrich existing KBs. To leverage the structured knowledge of tags, the research focuses on the task of automated social annotation and proposes a knowledge-enhanced deep learning model. Semantic-based loss regularisation is proposed to enhance the deep learning model with the similarity and subsumption relations between tags. In addition, a novel guided attention mechanism is proposed to mimic users' behaviour of reading the title before digesting the content for annotation. The integrated model, the Joint Multi-label Attention Network (JMAN), significantly outperformed state-of-the-art, popular baseline methods on four real-world datasets, with consistent performance gains from the semantic-based loss regularisers across several deep learning models. With careful treatment of the unstructured characteristics and with the novel probabilistic and neural-network-based approaches, useful knowledge can be learned from user-generated social media data and leveraged to support semantic-based applications. This validates the hypothesis of the research and addresses the research questions. Future studies will explore methods to efficiently learn and leverage other types of structured knowledge and to extend the current approaches to other user-generated data.
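The semantic-based loss regularisation can be sketched as an extra penalty that pushes the predicted probabilities of related tags towards each other (for similarity) and discourages a narrower tag from outscoring its broader parent (for subsumption). The tag pairs, probabilities, and weighting below are invented for illustration; JMAN's actual regularisers are defined over the model's label layers.

```python
# Illustrative semantic regulariser over per-tag predicted probabilities.
# All tag names, pairs, and weights are hypothetical.

def semantic_penalty(probs, similar_pairs, subsume_pairs, lam=0.1):
    """probs: {tag: predicted probability in [0, 1]}."""
    # Similar tags should receive similar scores.
    sim = sum((probs[a] - probs[b]) ** 2 for a, b in similar_pairs)
    # A child tag should not be more probable than its parent.
    sub = sum(max(0.0, probs[child] - probs[parent]) ** 2
              for child, parent in subsume_pairs)
    return lam * (sim + sub)

probs = {"python": 0.9, "programming": 0.4, "coding": 0.8}
penalty = semantic_penalty(probs,
                           similar_pairs=[("programming", "coding")],
                           subsume_pairs=[("python", "programming")])
print(round(penalty, 4))
```

Added to the base multi-label loss, a term of this shape is what lets relations mined from the tag hierarchy steer the deep model.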

56.Analyses of Investment Bank-Affiliated Mutual Fund Performance

Author:Obrey Michelo 2022
Abstract:This thesis presents three distinct essays on investment bank-affiliated mutual funds. The essays contribute to the ongoing debate on the net impact of investment bank-mutual fund relationships on investor wealth maximization. I approach this issue by addressing the following three specific research questions: First, do mutual funds affiliated with investment banks deliver better investment performance to investors than non-affiliated mutual funds? Second, do investment banks add investment value to affiliated mutual funds? Finally, if so, what is the possible mechanism of investment value creation? In the first essay, I study the performance of U.S. domestic equity mutual funds managed by fund families affiliated with investment banks. My analysis, based on various performance metrics, shows that investment bank-affiliated mutual funds significantly outperform peer mutual funds. Consistent with the information advantage hypothesis, I find that the outperformance is more pronounced among affiliated mutual funds that hold stocks covered by their equity research divisions. Overall, my findings are consistent with the idea that investment banks strategically transfer performance to their affiliated mutual funds, which benefits fund investors in an economically meaningful way. In the second essay, I investigate whether investment banks' equity research divisions add investment value to affiliated mutual funds. I find that stocks covered by the affiliated equity research division outperform non-covered stocks within an investment bank-affiliated mutual fund's portfolio. Consistent with the information flow hypothesis, I find that the highly (lowly) held covered stocks significantly outperform the highly (lowly) held non-covered stocks. Furthermore, the results also reveal that newly purchased covered stocks significantly outperform newly purchased non-covered stocks.
Overall, these results suggest that investment banks' equity research divisions make a marginal contribution to affiliated funds by assisting fund managers in their covered stock selection and trading decisions. In the final essay, I explore how mutual funds affiliated with investment banks benefit from recommendations issued by their investment bank-affiliated analysts. Due to limitations in directly observing services or potential non-public information provided by the equity research division to investment bank-affiliated fund managers, I investigate this issue from a new direction. Specifically, I examine the investment bank-affiliated analyst recommendations that disagree profoundly with the consensus recommendations issued on stocks that at least one of the affiliated mutual funds holds in its portfolio. I find that the performance of the covered stocks is consistent (i.e., in the same direction) with investment bank-affiliated analysts' dissent recommendations. Thus, investment bank-affiliated analysts' dissent recommendations have investment value. I also find that dissent recommendations have more investment value when issued by investment bank-affiliated analysts who are more experienced or employed by large and prestigious investment banks. My findings are consistent with the idea that the investment value of access to the equity research division by investment bank-affiliated mutual funds is highest when their sell-side analysts disagree profoundly with the consensus recommendations issued on covered stocks. Put together, the three essays' findings have significant implications. First, the findings improve the understanding, among mutual fund investors and practitioners from mutual fund management firms, of the functionality of investment bank-affiliated mutual funds. There is a need to revisit the oft-offered advice to prefer stand-alone funds over bank-affiliated funds.
Second, the findings call for regulatory debate concerning potential spillover effects between investment banks' other businesses and their affiliated mutual fund families. Indeed, there is a need for mutual fund regulators to facilitate better investor protection and a fairer market. Finally, the findings add to the literature investigating spillover effects in financial conglomerates offering multiple services.

57.Theoretical and Numerical study on Optimal Mortgage Refinancing Strategy

Author:Jin ZHENG 2015
Abstract:This work studies the optimal refinancing strategy for debtors from the perspective of balancing profit and risk, where the strategy is formulated as a utility optimization problem combining the expectation and variance of the discounted profit from refinancing. An explicit solution is given when the interest rate dynamics follow an affine model with a zero-coupon bond price. The results provide a reference for debtors dealing with refinancing by predicting the value of the contract in the future. Special cases are considered in which the interest rates are deterministic functions. Our formulation is robust and applicable to all short-rate stochastic processes satisfying the affine models.
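The mean-variance trade-off in the formulation above can be illustrated numerically: each candidate strategy is scored by U = E[profit] - lambda * Var[profit], and the debtor picks the maximiser. The simulated discounted-profit samples and the risk-aversion parameter below are invented for illustration; the thesis derives the solution analytically under an affine short-rate model rather than by sampling.

```python
# Toy mean-variance utility comparison between two refinancing strategies.
# Profit samples and risk aversion are hypothetical.

def utility(samples, risk_aversion):
    """Mean-variance utility of discounted-profit samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean - risk_aversion * var

refinance_now  = [4.0, 6.0, 5.0, 5.0]    # lower mean, low spread
wait_for_rates = [0.0, 12.0, 2.0, 10.0]  # higher mean, high spread
lam = 0.2
strategies = {"refinance_now": refinance_now, "wait_for_rates": wait_for_rates}
best = max(strategies, key=lambda k: utility(strategies[k], lam))
print(best)
```

With lam = 0.2 the variance penalty overturns the raw expected-profit ranking, which is exactly the profit-versus-risk balance the formulation is designed to capture.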

58.Compressive Sensing Based Grant-Free Communication

Author:Yuanchen Wang 2022
Abstract:Grant-free communication, where each user can transmit data without following a strict access-grant process, is a promising technique to reduce latency and support massive numbers of users. In this thesis, compressive sensing (CS), which exploits signal sparsity to recover data from a small number of samples, is investigated for user activity detection (UAD), channel estimation, and signal detection in grant-free communication, in order to extract information from the signals received by the base station (BS). First, CS-aided UAD is investigated by utilizing the property of quasi-time-invariant channel tap delays as prior information for burst users in the internet of things (IoT). Two UAD algorithms are proposed, referred to as gradient-based time-invariant channel tap delays assisted CS (g-TIDCS) and mean-value-based TIDCS (m-TIDCS), respectively. In particular, g-TIDCS and m-TIDCS do not require any prior knowledge of the number of active users, unlike existing approaches, and are therefore more practical. Second, periodic communication, one of the salient features of IoT, is considered. Two schemes, namely periodic block orthogonal matching pursuit (PBOMP) and periodic block sparse Bayesian learning (PBSBL), are proposed to exploit the non-continuous temporal correlation of the received signal for joint UAD, channel estimation, and signal detection. Theoretical analysis and simulation results show that PBOMP and PBSBL outperform existing schemes in terms of the success rate of UAD, bit error rate (BER), and accuracy of period and channel estimation. Third, UAD and channel estimation for grant-free communication in the presence of massive numbers of users actively connected to the BS are studied. An iterative UAD and signal detection approach for burst users is proposed, where the interference of the connected users on the burst users is reduced by applying a preconditioning matrix to the signals received at the BS.
The proposed approach provides significant performance gains over existing algorithms in terms of the success rate of UAD and BER. Last but not least, since physical-layer security is a critical issue for grant-free communication, the channel reciprocity in time-division duplex systems is utilized to design environment-aware (EA) pilots derived from transmission channels to prevent eavesdroppers from acquiring users' channel information. The proposed EA-pilot-based approach achieves a high level of security by degrading the eavesdropper's normalized mean square error performance of channel estimation.
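As a rough illustration of the sparse-recovery machinery that schemes such as PBOMP build on (this is plain orthogonal matching pursuit, not the thesis's own g-TIDCS, m-TIDCS, or PBOMP algorithms, and all names and parameters below are illustrative assumptions), a k-sparse vector can be recovered from far fewer measurements than its length by greedily selecting the columns of the sensing matrix most correlated with the residual:

```python
import numpy as np

def omp(A, y, sparsity):
    """Recover a `sparsity`-sparse x from y = A @ x via orthogonal matching pursuit."""
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Greedy step: pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

# Toy example: a 3-sparse signal of length 32 observed through 20 random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 32))
A /= np.linalg.norm(A, axis=0)   # unit-norm columns
x = np.zeros(32)
x[[4, 11, 27]] = [1.5, -2.0, 0.8]
y = A @ x
x_rec = omp(A, y, sparsity=3)
print("max recovery error:", np.max(np.abs(x_rec - x)))
```

In the grant-free setting described above, the sparse vector plays the role of the activity pattern of users (few active at a time), which is why CS can detect activity and estimate channels from short received-signal records.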

59.Public Participation in the Urban Regeneration Process - A comparative study between China and the UK

Author:Lei SUN 2016
Abstract:The primary aim of this research is to explore how urban regeneration policies and practices are shaped by larger social, political, and economic structures in China and the UK respectively, and how the individual agents involved in the regeneration process formulate their strategies, take their actions, and at the same time use discourses to legitimize those actions. It further probes the lessons that each country could learn from the other's successes or failures in implementing regeneration initiatives. The thesis adopts a cross-national comparative strategy and draws intensively on Variegated Neoliberalism, Neoliberal Urbanism, and Critical Urban Theory in developing its theoretical framework. The comparison was conducted at three levels. At the national level, the evolution of urban regeneration and public participation policies and practices in the two countries is compared; at the city level, neoliberal urban policies and their impacts on the development of two selected cities, Liverpool in the UK and Xi'an in China, are compared; at the micro level, the major players' interactions and the discourses they used to underpin their actions in two selected case studies, the Kensington Regeneration in Liverpool and the Drum Tower Muslim District in Xi'an, are examined and compared. In carrying out the study, the literature on the transformation of urban policies in the two countries and detailed information relating to the two selected cities and case studies were reviewed, and around 35 semi-structured interviews were conducted. The research results demonstrate the suitability of Variegated Neoliberalism in explaining how the process of neoliberalization in both China and the UK is affected by non-market elements. 
It is found that the stage of economic development, the degree of decentralization, the features of politics, and the degree of state intervention in the economy play a significant role in shaping the distinctive features of urban regeneration policies in the two countries. In spite of the differences, similar trends towards neoliberalization can be found in the evolution of urban regeneration policies and practices in both countries: the elimination of public housing and low-rent accommodation, the creation of opportunities for speculative investment in real estate markets, official discourses of urban disorder, and 'entrepreneurial' discourses and representations focused on urban revitalization and reinvestment all play significant roles in the formation and implementation of regeneration policies. Moreover, similar tactics are used by municipal governments in both countries to overcome resistance from local residents. The research also finds that the discourses used by municipal governments to describe regeneration projects are heavily influenced by Neoliberal Urbanism, in marked contrast to those used by local residents, who intensively reference concepts from Critical Urban Theory. It is suggested that the Chinese government should learn from its British counterpart's experience in introducing partnerships to deliver urban regeneration programs, and at the same time learn how to use formal venues to resolve conflicts arising from physical regeneration programs. For the British government, lessons could be learnt from China's successful experience of decentralization and the empowerment of municipalities.

60.The impacts of supply chain design on firm performance: perspectives from leadership, network structure, and resource dependency

Author:Taiyu Li 2022
Abstract: Supply chain management has an important role to play in business. To keep a company competitive and able to survive international competition, it is important to optimise the processes of production, supply, and sales. Today, operating companies are faced with an ever-increasing amount of information and problems, and the relationships between supply chain participants are becoming increasingly complex. These problems are even more acute in developing countries, which calls for advanced supply chain design to manage the entire operational process of the business. Especially in the post-pandemic era, business development in many factories has become unpredictable, transport logistics and costs are difficult to manage and control accurately, and the importance of supply chain resilience is becoming ever more evident. The field of supply chain research needs to break out of its old boundaries and seek more diverse development models and supply chain designs. The results of this thesis first reveal how supply chain leadership affects supply chain performance, providing a theoretical basis for building efficient supply chains. It then reveals the relationship between the risk contagion efficiency of a supply chain network and the competitiveness of its individual members. Next, the thesis emphasizes the impact of resource dependence and operational slack on improving supply chain resilience, and provides guidance and suggestions for designing supply chain structures in the post-COVID era. Finally, the discussion of supplier relationships and innovation performance offers suggestions on how to improve competitiveness and innovation capability when designing supply chain structures. 
This thesis conducts a rigorous quantitative analysis, from multiple perspectives, of the factors that need to be considered in supply chain design and provides a thorough discussion of their impacts, contributing to research in the field of supply chain management. It also provides policy and operational management guidance for supply chain practitioners.