Find Theses

1. The impacts of biotic and abiotic factors on resource subsidy processes - leaf litter breakdown in freshwaters

Author: Hongyong Xiang, 2019
Abstract: Freshwaters are closely linked with adjacent terrestrial ecosystems through reciprocal resource subsidies: fluxes of nutrients, organisms, and materials between ecosystems. Terrestrial ecosystems provide many resource subsidies to freshwaters, including leaf litter, one of the most prevalent terrestrially derived subsidies. Inputs of leaf litter fuel detritivore food webs, serve as food resources and refuges, and affect nutrient cycling in freshwaters. The decomposition of leaf litter is subject to many biotic and abiotic factors, which makes it a good indicator of freshwater ecosystem functioning. Yet this ecosystem process has been affected by anthropogenic disturbances that alter abiotic and biotic factors in nature. This thesis therefore aimed to investigate some previously under-investigated or unclear but important factors that may affect the decomposition of leaf litter in streams. First, I reviewed the importance of resource subsidy fluxes between riparian zones and freshwaters and how these subsidies can influence recipient ecosystems. Then, I conducted a field experiment exploring the effects of an anthropogenic carrion subsidy (chicken meat) and an environmentally relevant concentration of glyphosate (the most widely applied herbicide worldwide) on leaf litter decomposition and on the invertebrate communities colonizing leaf-litter bags deployed in streams with different types of land use. Next, I conducted a mesocosm experiment near an urban stream to investigate the effects of water temperature (~8 °C above ambient vs ambient), a consumer, snails (presence vs absence), and leaf-litter quality (intact vs >40% of leaf area consumed by terrestrial insects) on litter decomposition.
Finally, I explored the global patterns of riparian leaf litter C, N, P, and their stoichiometric ratios along gradients of climatic (mean annual temperature and precipitation) and geographic (absolute latitude and altitude) factors, and the differences among biotic factors (phylogeny, leaf habit, N-fixing function, invasion status, and life form). The results of the field experiment indicated that, in coarse-mesh bags, glyphosate, the carrion subsidy, and the two combined decreased litter breakdown rates by 6.3%, 22.6%, and 24.3%, respectively; in fine-mesh bags, glyphosate and the two combined retarded litter breakdown rates by 8.3% and 12.5%, respectively. Litter decomposition also differed among streams, with the highest breakdown rates in village streams and the lowest in urban/suburban streams. Invertebrate communities differed significantly among streams, with biodiversity indices and total taxon richness highest in village streams and lowest in suburban streams. However, the overall effects of the carrion subsidy and glyphosate on macroinvertebrates were not significant. The mesocosm experiment indicated that warming and the presence of snails accelerated litter decomposition by 60.2% and 34.9%, respectively, while litter breakdown rates of insect-damaged leaves were 5.1% slower than those of intact leaves because of lower leaf-litter quality. The meta-analysis demonstrated that, in general, global riparian leaf litter had higher N and P concentrations but lower C concentrations and lower C:N and C:P ratios than terrestrial leaf litter. Riparian leaf litter quality changed along gradients of climatic and geographic predictors, and these patterns differed between leaf habits (evergreen or deciduous) and climate zones (tropical or non-tropical). Overall, my research provides important information on resource subsidy processes, which will benefit freshwater ecosystem management to support biodiversity and maintain ecosystem services.
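
Litter breakdown rates such as those reported above are conventionally expressed as the decay constant k of a negative exponential model, M_t = M_0·e^(−kt). The following sketch shows how k is estimated from litter-bag mass loss and how a percentage reduction in breakdown rate would be computed; the masses and durations are hypothetical illustrations, not data from the thesis.

```python
import math

def decay_constant(m0, mt, days):
    """Estimate k (per day) from initial and remaining litter mass,
    assuming the negative exponential model M_t = M_0 * exp(-k * t)."""
    return -math.log(mt / m0) / days

# Hypothetical litter-bag masses (g) after a 60-day stream deployment.
k_control = decay_constant(10.0, 4.0, 60)   # reference treatment
k_treated = decay_constant(10.0, 5.0, 60)   # e.g. herbicide-exposed bags

# Percentage reduction in breakdown rate relative to the reference.
reduction = (k_control - k_treated) / k_control * 100
print(f"k_control = {k_control:.4f}/day, k_treated = {k_treated:.4f}/day")
print(f"breakdown rate reduced by {reduction:.1f}%")
```

Comparing treatments via k rather than raw mass loss is what makes percentages like "22.6% slower" comparable across deployments of different lengths.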

2. Detection and Recognition of Traffic Scene Objects with Deep Learning

Author: Rongqiang Qian, 2018
Abstract: Mobility is closely related to the development of society and the quality of individual life. Through mass automobile production and traffic infrastructure construction, advanced countries have reached a high degree of individual mobility. To increase the efficiency, convenience, and safety of mobility, advanced traffic infrastructure, transportation systems, and automobiles should be developed. Among the systems in modern automobiles, camera-based assistance systems are among the most important components. Recently, with the development of driver assistance systems and autonomous cars, detection and recognition of traffic scene objects based on computer vision have become increasingly indispensable. Meanwhile, deep learning methods, in particular convolutional neural networks, have achieved excellent performance in a variety of computer vision tasks. This thesis presents contributions to computer vision and deep learning methods for traffic scene object detection and recognition. The first approach develops a number of methods for traffic sign detection and recognition. For traffic sign detection, template matching is applied with new features extended from chain codes. Moreover, region-based convolutional neural networks are applied to detect traffic signs painted on the road surface. For traffic sign recognition, convolutional neural networks with a variety of architectures are trained with different training algorithms. The second approach focuses on detection of traffic-related text. A novel license plate detection framework is developed that improves detection performance by simultaneously completing detection and segmentation. Due to the larger number and more complex layout of Chinese characters, Chinese traffic text detection faces more challenges than English text detection.
Therefore, Chinese traffic text is detected by applying convolutional neural networks and a directed acyclic graph. The final approach develops a method for pedestrian attribute classification. Generally, irrelevant elements are included in the features of convolutional neural networks. To improve classification performance, a novel feature selection algorithm is developed to refine these features.

3. Deep Learning from Smart City Data

Author: Qi Chen, 2022
Abstract: Rapid urbanisation brings severe challenges to sustainable development and the living quality of urban residents. Smart cities develop holistic solutions for urban ecosystems using data collected from different types of Internet of Things (IoT) sources. Today, smart city research and applications have surged significantly as a consequence of IoT and machine learning advances. As advanced machine learning methods, deep learning techniques provide an effective framework that facilitates data mining and knowledge discovery, especially in computer vision and natural language processing. In recent years, researchers from various fields have attempted to apply deep learning technologies to smart city applications in order to establish a new smart city era. Much research effort has been devoted to smart city domains such as intelligent transportation, smart healthcare, and public safety. Meanwhile, many challenges remain, as deep learning techniques are still immature for smart city applications. In this thesis, we first provide a review of the latest research on the convergence of deep learning and smart cities for data processing. The review is conducted from two perspectives: the technique-oriented view presents popular and extended deep learning models, while the application-oriented view focuses on representative application domains in smart cities. We then focus on two areas, intelligent transportation and social media analysis, to demonstrate how deep learning can be used in real-world applications by addressing prominent issues such as external knowledge integration, multi-modal knowledge fusion, and semi-supervised or unsupervised learning. In the intelligent transportation area, an attention-based recurrent neural network is proposed to learn from traffic flow readings and external factors for multi-step prediction.
More specifically, the attention mechanism is used to model the dynamic temporal dependencies of traffic flow data, and a general fusion component is designed to incorporate the external factors. For the traffic event detection task, a multi-modal Generative Adversarial Network (mmGAN) is designed. The proposed model contains a sensor encoder and a social encoder to learn from both traffic flow sensor data and social media data. The mmGAN model is further extended to a semi-supervised architecture by leveraging generative adversarial training to learn from unlabelled data. In the social media analysis area, three deep neural models are proposed for crisis-related data classification and COVID-19 tweet analysis. We designed an adversarial training method that generates adversarial examples for image and textual social data to improve the robustness of multi-modal learning. As most social media data related to crises or COVID-19 is unlabelled, we then proposed two unsupervised text classification models on the basis of the state-of-the-art BERT model. We used adversarial domain adaptation and a zero-shot learning framework to extract knowledge from a large amount of unlabelled social media data. To demonstrate the effectiveness of our proposed solutions for smart city applications, we collected a large amount of publicly available real-time traffic sensor data from the California Department of Transportation and social media data (i.e., traffic, crisis, and COVID-19) from Twitter, and built several datasets for examining prediction and classification performance. The proposed methods successfully addressed the limitations of existing approaches and outperformed popular baseline methods on these real-world datasets. We hope this work moves the relevant research one step further towards creating true intelligence for smart cities.
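
The temporal-attention idea described above can be illustrated with a deliberately tiny sketch: each past flow reading receives a score, the scores are softmax-normalised into weights, and the weighted sum forms the context used for prediction. Everything here (the scalar "query" weight and the flow numbers) is invented for illustration; in the thesis these quantities are learned by the recurrent model, not fixed by hand.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(history, query_weight=0.01):
    """Score each past reading against a (purely illustrative) scalar
    query, softmax the scores into attention weights, and return the
    weighted sum used as the prediction context."""
    scores = [query_weight * h for h in history]
    weights = softmax(scores)
    context = sum(w * h for w, h in zip(weights, history))
    return context, weights

flow = [120.0, 135.0, 150.0, 180.0]   # hypothetical 5-minute flow readings
context, weights = attention_pool(flow)
print(round(context, 1), [round(w, 3) for w in weights])
```

The point of the sketch is only the mechanism: the weights sum to one, so the context is a convex combination of past readings, with larger weights on the time steps the (learned) scoring function deems most relevant.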

4. Exploring the Chromium Poisoning Mechanisms and Development of New Ionic Electrolyte Materials in Solid Oxide Fuel Cell

Author: Meigeng Gao, 2022
Abstract: Solid oxide fuel cells (SOFCs) offer clean, renewable power generation with high efficiency, but they are susceptible to chromium poisoning, which leads to considerable degradation of electrochemical performance. Volatile chromium species are released from Cr-containing interconnect materials, diffuse, and are deposited on electrodes as new phases through interaction with electrode materials. The mechanisms of Cr poisoning are not yet completely clear, and further study is required on how to alleviate Cr poisoning in SOFCs. This thesis presents several possible mechanisms and the corresponding degradation phenomena at cathodes. While there have been several studies of chromium deposition on the conventional La0.6Sr0.4Co0.2Fe0.8O3 (LSCF) cathode, to date little research has focused on understanding the effect of microstructure, especially porosity, on the chromium deposition process. We analysed the microstructure of the initial ceramic at four porosities by various physisorption methods. We found that macropores dominate at 50% porosity (LSCF-50), while mesopores dominate at 20% porosity (LSCF-20). Porosity also changes the initial surface composition: Co-rich for LSCF-50 and Sr-rich for LSCF-20. Upon Cr exposure, the phase and chemical state changes were identified by XRD, Raman spectroscopy, ICP-OES, and XPS with respect to different porosities and ageing times. Among the Cr deposits, a novel phase appeared, correlated with Cr substitution into the LSCF lattice. The Cr deposits contained three valence states of Cr, as a result of atomic interactions and interdiffusion between the Cr source and LSCF. On the porous ceramic, surface Sr is correlated with the formation of SrCrO4, whereas the dense ceramic showed a lower concentration of SrCrO4 and favoured the formation of Cr substitution into the LSCF lattice. Cr adsorption also causes the redistribution of other cations at the surface and in the bulk.
At LSCF-50, La, Fe, and Co cations preferentially dissolved into the bulk with ageing time; meanwhile, CoOx was formed and segregated at specific sites associated with the macropore distribution. Cr penetration could be detected at depths up to 17 mm by EDX. Depth profiling showed that the Cr concentration decreased non-linearly with depth. Cr adsorption increased the concentrations of Fe and Co in the near-surface region; moreover, Sr enrichment occurred in the near-surface area and the bulk. At LSCF-20, the surface concentrations of La, Fe, and Co fluctuated with ageing time. Chromium deposition on LSCF at porosities of 20% and 50% had distinct kinetic mechanisms, which may be caused by different gas transport preferences. A possible porosity-dependent mechanism is proposed in the thesis. The density functional theory plus U (DFT+U) method was used to investigate the electronic structure and stability of LSCF and Cr-substituted perovskites at different spin states. Moreover, theoretical calculations predicted the decomposition products of LSCF and Cr-substituted perovskites and may explain cation diffusion at the surface. LSCF and Cr-doped LSF were suggested to decompose into binary/ternary oxides and/or simpler perovskites. The decomposition pathway of Cr-doped LSF greatly depends on temperature and environment. Computational approaches suggest LaSiAl5/6Ge1/6O5.083, LaSi5/6AlP1/6O5.083, and LaSi7/6Al5/6O5.083 as potential oxide electrolyte materials. The simulated interstitial positions suggest a possible conduction pathway and predict its flexibility. These insights can guide further composition optimization. The relatively high energy above the convex hull for other doping schemes of Ca2Al2SiO7, BaSi2O5, and K2Ba7Si16O40 indicates high phase instability, suggesting that these are not good candidates for electrolyte materials.

5. Enjoyment in VR games: Factors, Challenges, and Simulator Sickness Mitigation Techniques

Author: Diego Vilela Monteiro, 2021
Abstract: Although Virtual Reality (VR) has been in development for a while, the last decade has seen a surge in its popularity with the advent of commercial VR Head-Mounted Displays (HMDs), making the technology more accessible. One field that significantly benefits from VR is the entertainment industry, for example, games. Games can be challenging to design as they involve several components that are also found in other types of applications, such as presentation, navigation, interaction with virtual agents, and in-game measurements. Despite recent advances, the optimal configurations for game applications in VR are still widely unexplored. In this thesis, we propose to fill this gap with a series of studies that analyse different components involved in making VR applications more enjoyable. We study three characteristics that heavily influence game enjoyment: (1) the aesthetic realism and emotions of virtual agents; (2) viewing perspective (First-Person Perspective and Third-Person Perspective), its influence on subjective feelings, and how to measure those feelings; and (3) how to reduce or eliminate VR Sickness without affecting the experience (or while affecting it positively). Our results showed that virtual agents' facial expressions are one of the most important aspects to be considered. On the second topic, we observed that viewing perspective influences VR Sickness; however, other subjective feelings were challenging to measure in this context. On the last topic, we analysed existing tendencies in Simulator Sickness mitigation techniques that do not affect in-game mechanics and present a novel solution that offers a good trade-off between mitigating VR Sickness and maintaining or enhancing immersion and performance. Finally, we propose some guidelines based on our results.

6. Large-scale functional annotation of individual RNA methylation sites by mining complex biological networks

Author: Xiangyu Wu, 2021
Abstract: Increasing evidence suggests that post-transcriptional RNA modifications regulate essential biomolecular functions and are related to the pathogenesis of various diseases. To date, the study of gene regulation at the epitranscriptome layer has mostly focused on the functions of the mediator proteins of RNA methylation, limited by laborious experimental procedures, and there has been little investigation of the functional relevance of individual m6A RNA methylation sites. To address this, we annotated human m6A sites at large scale based on the guilt-by-association principle, using complex biological networks. In the first chapter, the network was constructed from public human MeRIP-Seq datasets profiling the m6A epitranscriptome under independent experimental conditions. By systematically examining the network characteristics obtained from the RNA methylation profiles, a total of 339,158 putative gene ontology functions associated with 1,446 human m6A sites were identified. These are biological functions that may be regulated at the epitranscriptome layer via reversible m6A RNA methylation. The results were further validated on a soft benchmark by comparison with a random predictor. In the second chapter, another approach was applied to annotate individual human m6A sites by integrating the methylation profile, gene expression profile, and protein-protein interaction (PPI) network under the guilt-by-association principle. The consensus signals on sites were amplified by multiplying the co-methylation network and the methylation-expression network, and the PPI network smoothed the correlation between a query site and gene expression for subsequent GSEA functional annotation. In the third chapter, we functionally annotated 18,886 m6A sites that are conserved between human and mouse from a larger epitranscriptome dataset using the method previously described.
In addition, we completed two side projects on SARS-CoV-2 viral m6A site prediction and on m6A site prediction from Nanopore sequencing data.
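
The guilt-by-association principle that underlies the annotation work can be shown with a toy calculation: an unannotated site inherits evidence for a function in proportion to its network weights with sites already known to carry that function. The adjacency weights and labels below are invented purely for illustration and do not come from the thesis's networks.

```python
def guilt_by_association(adjacency, labels):
    """Score each site for a function as the adjacency-weighted sum of
    its neighbours' known labels (1 = annotated with the function)."""
    n = len(labels)
    return [sum(adjacency[i][j] * labels[j] for j in range(n) if j != i)
            for i in range(n)]

# Hypothetical co-methylation weights among four m6A sites; sites 0 and 1
# are known to carry some GO function, sites 2 and 3 are unannotated.
adj = [
    [0.0, 0.9, 0.8, 0.1],
    [0.9, 0.0, 0.7, 0.2],
    [0.8, 0.7, 0.0, 0.3],
    [0.1, 0.2, 0.3, 0.0],
]
labels = [1, 1, 0, 0]
scores = guilt_by_association(adj, labels)
print(scores)  # site 2 co-methylates strongly with annotated sites
```

Site 2, tightly co-methylated with the two annotated sites, scores far higher than site 3, so it becomes the stronger candidate for the putative function — the same logic, scaled up, drives the large-scale annotation described above.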

7. A Corpus-based Register Analysis of Corporate Blogs: text types and linguistic features

Author: Yang WU, 2016
Abstract: A main theme in sociolinguistics is register variation, the situation- and use-dependent variation of language. Numerous studies have provided evidence of linguistic variation across situations of use in English. However, very little attention has been paid to the language of corporate blogs (CBs), often seen as an emerging genre of computer-mediated communication (CMC). Previous studies on blogs and corporate blogs have provided important information about their linguistic features and functions; however, our understanding of linguistic variation in corporate blogs remains limited, because many of these studies have focused on individual linguistic features rather than on how features interact and on the possible relations between forms (linguistic features) and functions. Given these limitations, a more systematic perspective on linguistic variation in corporate blogs is necessary. To study register variation in corporate blogs more systematically, a combined framework rooted in Systemic Functional Linguistics (SFL) and register theories (e.g., Biber, 1988, 1995; Halliday & Hasan, 1989) is adopted. This combination is based on the common ground they share concerning the functional view of language, co-occurrence patterns of linguistic features, and the importance of large corpora to linguistic research. Guided by this framework, this thesis aims to: 1) investigate the functional linguistic variation in corporate blogs, identify the text types that are distinguished linguistically, and determine how the CB text types cut across CB industry categories; and 2) identify salient linguistic differences across text types in corporate blogs in the configuration of the three components of the context of situation: field, tenor, and mode of discourse.
To achieve these goals, a 590,520-word corpus consisting of 1,020 textual posts from 41 top-ranked corporate blogs is created and mapped onto the combined framework, which consists of Biber’s multi-dimensional (MD) approach and Halliday’s SFL. Accordingly, two sets of empirical analyses are conducted in turn. First, using a corpus-based MD approach that applies multivariate statistical techniques (including factor analysis and cluster analysis) to the investigation of register variation, CB text types are identified. Then, linguistic features, including the most common verbs and their process types, personal pronouns, modals, lexical density, and grammatical complexity, are selected from the language metafunctions of mode, tenor, and field within the SFL framework, and their differences across text types are analysed. The results of these analyses show not only that the corporate blog is a hybrid genre, representing a combination of various text types that serve different communicative purposes and functional goals, but also that certain text types are closely related to particular industries, meaning that the CB texts categorized into a certain text type come mainly from a particular industry. On this basis, the lexical and grammatical features (i.e., the most common verbs, pronouns, modal verbs, lexical density, and grammatical complexity) associated with Halliday’s metafunctions are further explored and compared across six text types. It is found that the language features related to field, tenor, and mode in corporate blogs demonstrate a dynamic nature: centring on an interpersonal function, these online blogs in a business setting are used primarily for sales, customer relationship management, and branding.
This research project contributes to the existing field of knowledge in the following ways. Firstly, it develops the methodology used in corpus investigations of language variation and paves the way for further research into corporate blogs and other forms of electronic communication and, more generally, for corpus-based investigations of other language varieties. Secondly, it adds greatly to the description of the corporate blog as a language variety in its own right, including the different text types identified in CB discourse and the linguistic features realized in the context of situation. This highlights the fact that corporate blogs cannot be regarded as a single, uniform discourse; rather, they vary according to text type and context of situation.
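
Lexical density, one of the SFL-related features compared across text types above, is conventionally the proportion of content words among all running words. A toy sketch follows; the stop-word list is a crude stand-in for the proper part-of-speech tagging a corpus study would use, and the sample sentence is invented.

```python
# Crude function-word list standing in for a real POS tagger.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "to", "in", "and", "is", "are", "for",
    "on", "with", "that", "this", "it", "we", "our", "as", "by",
}

def lexical_density(text):
    """Content words divided by total words, the usual lexical-density
    measure; punctuation is stripped and case is ignored."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens)

sample = "We are proud to announce our new product line for this spring."
print(round(lexical_density(sample), 2))
```

A higher value indicates more information-packed (typically written, field-oriented) text; lower values are characteristic of interactive, interpersonal registers — which is why the measure helps distinguish CB text types.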

8. A Near-field Wireless Power Transfer System with Planar Split-ring Loops for Medical Implants

Author: Jingchen Wang, 2020
Abstract: With continuous progress in science and technology, a myriad of implantable medical devices (IMDs) have been invented to improve public health and wellbeing. One of the main problems with these devices is their limited battery lifetime, which results in otherwise unnecessary surgeries to replace depleted batteries and leads to excessive medical expenses. Wireless power transfer (WPT) is a promising technology that could be used to remedy this. Wireless power technologies, both through the transfer of transmitted radio frequency (RF) power and through the harvesting of RF energy from the ambient environment and its subsequent conversion to useable electrical energy, are emerging as important features for the future of electronic devices in general and have attracted an upsurge in research interest. Unfortunately, the path to realising this wire-free charging dream is paved with many thorns, and critical challenges remain to be addressed. This thesis aims to address some of these challenges by developing an efficient WPT system for IMDs. The work begins with a comprehensive study of currently applied WPT methods, which broadly fall into two categories: far-field (radiative) WPT and near-field (non-radiative) WPT. The review includes a brief history of WPT, comparisons between current methodologies, and a comprehensive literature survey. Magnetic resonance coupling (MRC) WPT is emphasised due to its advantages for the desired application, making it the technology of choice for system development. Designing an MRC-WPT system requires an understanding of the performance of the four basic topologies available for the MRC method. Following an investigation of these, it is found that series primary circuits are generally most suitable for WPT and that the choice of a series or parallel secondary circuit depends on the relative size of the load impedance.
Importantly, design parameters must be optimised to avoid the phenomenon of frequency splitting and to simultaneously obtain maximum power transfer efficiency (PTE) and load power. The use of printed spiral coils (PSCs) as inductors in the construction of WPT circuits for IMDs, which can save space and be integrated with other circuit boards, is then investigated. The challenges PSCs present for WPT mainly relate to maintaining an inductive characteristic at frequencies in the Medical Implant Communication Service (MICS) band and to maximising the PTE between primary and secondary circuits. Investigations of PSC design parameters are performed to obtain inductive characteristics at high frequencies, and the split-ring loop is proposed to increase the quality factor relative to that offered by the PSC, which is shown to enhance WPT performance. To simplify the resonating circuit configuration required for MRC-WPT, a self-resonating split-ring loop with a series inductor-capacitor characteristic has been developed. A pair of these self-resonators has been adopted in a series primary-series secondary WPT system operating at high frequency. This differs from traditional planar self-resonators, which offer parallel self-resonance characteristics that are less desirable because of the reduced system power insertion of a parallel primary resonator. Finally, a system for implantable devices is developed using the split-ring loop, taking into consideration the effects of body tissues, whose dielectric characteristics have a significant influence on WPT performance. Due concern is also paid to human safety with regard to radiated RF power. A series resonating split-ring loop for transmitting power is tuned to the desired frequency through the addition of a lumped-element capacitor.
A single loop serving as the receiving resonator, with a low Specific Absorption Rate (SAR), is designed to allow greater transmit power than in previous work whilst satisfying the relevant human-safety standards. A rectifier circuit is also designed to convert the received RF energy into useable electrical energy, allowing the realisation of the proposed WPT system. In a nutshell, this thesis places emphasis on solutions to the challenges of using MRC-WPT for IMDs. An efficient near-field WPT system for such devices is successfully demonstrated and should have profound significance in pushing forward the future development of this topic.
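
Tuning a loop with a lumped-element capacitor, as described above, follows the standard series LC resonance relation f0 = 1/(2π√(LC)). The sketch below computes the capacitance needed to resonate a given loop inductance at a target frequency; the 150 nH inductance is an illustrative value, not the thesis's design, while 403.5 MHz is simply the centre of the 402-405 MHz MICS band.

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Series LC resonant frequency: f0 = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def capacitor_for(inductance_h, f0_hz):
    """Lumped capacitance needed to resonate a given inductance at f0."""
    return 1.0 / ((2.0 * math.pi * f0_hz) ** 2 * inductance_h)

# Illustrative: tune a 150 nH loop to the 403.5 MHz MICS-band centre.
L = 150e-9
C = capacitor_for(L, 403.5e6)
print(f"required C = {C * 1e12:.2f} pF, "
      f"check f0 = {resonant_frequency(L, C) / 1e6:.1f} MHz")
```

In practice the loop's self-inductance would be extracted from measurement or full-wave simulation (including tissue loading), and the capacitor value adjusted accordingly.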

9. Improving the Performance of Halide Perovskite Thin Film through Pb(II)-Coordination Chemistry

Author: Tianhao Yan, 2021
Abstract: Recently, organo-lead-halide perovskite solar cells have attracted wide and growing attention due to their remarkable photoelectric properties, low cost, and ease of fabrication. However, the development of perovskite solar cells is still limited by several factors, such as strict fabrication conditions, low stability, small active areas, and poor reproducibility. The formation of a perovskite film can be regarded as a series of chemical reactions and crystallization processes in which Pb(II) coordination chemistry is involved. We therefore set out to improve the performance of perovskite films from the viewpoint of Pb(II) coordination chemistry. Using a solvent engineering strategy, a series of inverted perovskite solar cells (PSCs) with the device structure ITO/PEDOT:PSS/CH3NH3PbI3-xClx/PCBM/Al were successfully fabricated via a simple one-step coating method from solutions of chloride precursors in mixtures of N,N-dimethylformamide (DMF) and γ-butyrolactone (GBL) at different ratios. The highest average power conversion efficiency (PCE) of 11.251% was achieved when the solvent with DMF:GBL = 3.5:6.5 (v:v) was used for precursor preparation, while the average PCEs for devices from precursors with pure GBL and pure DMF as the solvent were 8.600% and 8.082%, respectively. Detailed scanning electron microscopy (SEM), X-ray diffraction (XRD), and UV-visible spectroscopy (UV-Vis) studies showed that the large increase in the PCE of the PSCs resulted from the marked quality improvement of the perovskite film, owing to the fast nucleation and slow crystal growth introduced by the dual-solvent system. Plausible formation mechanisms of perovskite films from different solvents were proposed. The film formation processes from different precursors were also studied, and several intermediates in the perovskite film formation process were isolated and structurally characterized.
Single crystals were successfully grown, and the crystal structures of MAPbI3·DMF, MAPbI2Cl·DMF, and MAPb1.5I3Br·DMF were solved; the structures of MAPbI2Cl·DMF and MAPb1.5I3Br·DMF were identified for the first time. Meanwhile, the recrystallization of MAPbI2Cl·DMF was found to occur before spin-coating or at the early stage of annealing and to be the key to producing perovskite films with high crystallinity and high orientation from chloride precursors. Based on the structures and chemical properties of the intermediates, chemical reactions and mechanisms for perovskite film formation from different precursors were proposed. In addition, several groups of PSCs from lead acetate trihydrate-based precursors were constructed by varying the hydrate number, finely tuning the spin-coating method, and applying DMSO as an additive. It was found that the H2O molecules in precursors can greatly improve film coverage, and that a pre-heating method can avoid low crystallinity while ensuring high coverage of the perovskite thin films. In addition, the DMSO additive influenced the formation kinetics of the perovskite films and improved the reproducibility of the devices. As a result, PSCs with a PCE of 15.714% were achieved.

10. Supply chain resilience development and risk management in volatile environments

Author: Yu Han, 2022
Abstract: Supply chains are operating in increasingly volatile business environments. Supply chain resilience development has become the core task of supply chain risk management for companies seeking to maintain effectiveness and efficiency. To achieve this, research has predominantly focused on approaches to improve companies’ ability to resist, respond to, and recover from the effects of disruptive events. Fundamentally, developing resilience for risk management considers three phases: the pre-disruption, during-disruption, and post-disruption phases. Accordingly, supply chain resilience research primarily investigates supply chain readiness, responsiveness, and recovery. Within the main streams of supply chain resilience research, studies predominantly focus on the conceptual development of various capabilities for each phase, with business practices serving only to elucidate the definitions of supply chain resilience and its capabilities. The extant research thus lacks sufficient empirical understanding of how resilience is achieved in volatile environments in industrial sectors. To address these gaps in the literature, three papers have been developed: one literature review (conceptual) paper and two empirical papers. Companies from the manufacturing industry, especially the machinery sector, were selected for investigation. This study investigates different aspects of supply chain resilience for efficient risk management in uncertain and volatile environments, such as international sourcing strategies for responding to a global pandemic and the role of government interventions in achieving collaborative relationships during a pandemic. This research thereby makes a significant theoretical contribution to the supply chain resilience and risk management literature and provides insightful, practical implications for supply chains operating in turbulent business environments.

11.Multimodal Approach for Big Data Analytics and Applications

Author:Gautam Pal 2021
Abstract:The thesis presents multimodal conceptual frameworks and their applications in improving the robustness and performance of big data analytics through cross-modal interaction or integration. A joint interpretation of several knowledge renderings such as stream, batch, linguistics, visuals and metadata creates a unified view that can provide a more accurate and holistic approach to data analytics compared to a single standalone knowledge base. Novel approaches in the thesis involve integrating the multimodal framework with state-of-the-art computational models for big data, cloud computing, natural language processing, image processing, video processing, and contextual metadata. The integration of these disparate fields has the potential to improve computational tools and techniques dramatically. Thus, the contributions place multimodality at the forefront of big data analytics; the research aims at mapping and understanding multimodal correspondence between different modalities. The primary contribution of the thesis is the Multimodal Analytics Framework (MAF), a collaborative ensemble framework for stream and batch processing that draws on cues from multiple input modalities like language, visuals and metadata to combine the benefits of both low latency and high throughput. The framework is a five-step process. Data ingestion: as a first step towards big data analytics, a high-velocity, fault-tolerant streaming data acquisition pipeline is proposed through a distributed big data setup, followed by mining and searching for patterns while the data is still in transit. The data ingestion methods are demonstrated using Hadoop ecosystem tools like Kafka and Flume as sample implementations. Decision making on the ingested data to use the best-fit tools and methods: in big data analytics, the primary challenge often lies in processing heterogeneous data pools with a one-method-fits-all approach.
The research introduces a decision-making system to select the best-fit solutions for the incoming data stream. This is the second step towards building the data processing pipeline presented in the thesis. The decision-making system introduces a fuzzy graph-based method to provide real-time and offline decision making. Lifelong incremental machine learning: in the third step, the thesis describes a lifelong learning model at the processing layer of the analytical pipeline, following the data acquisition and decision making at step two, for downstream processing. Lifelong learning iteratively increments the training model using a proposed Multi-agent Lambda Architecture (MALA), a collaborative ensemble architecture between stream and batch data. As part of the proposed MAF, MALA is one of the primary contributions of the research. The work introduces a general-purpose and comprehensive approach to hybrid learning over batch and stream processing to achieve lifelong learning objectives. Improving machine learning results through ensemble learning: as an extension of the lifelong learning model, the thesis proposes a boosting-based ensemble method as the fourth step of the framework, improving lifelong learning results by reducing the learning error in each iteration of a streaming window. The strategy is to incrementally boost the learning accuracy on each mini-batch, enabling the model to accumulate knowledge faster. The base learners adapt more quickly in the smaller intervals of a sliding window, improving the machine learning accuracy rate by countering concept drift. Cross-modal integration between text, image, video and metadata for more comprehensive data coverage than a text-only dataset: the final contribution of this thesis is a new multimodal method in which three different modalities (text, visuals, and metadata) are intertwined along with real-time and batch data for more comprehensive input data coverage than text-only data.
The model is validated through a detailed case study on the contemporary and relevant topic of the COVID-19 pandemic. While the remainder of the thesis deals with text-only input, the COVID-19 case study analyzes textual and visual information in an integrated manner.
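The incremental, mini-batch learning idea at the heart of the lifelong learning step can be sketched with standard tooling. The example below is an illustrative stand-in, not the thesis’s MALA/MAF implementation: it uses scikit-learn’s partial_fit to update one model window by window over a synthetic stream, so accuracy accumulates across mini-batches.

```python
# Illustrative sketch (not the thesis's MALA/MAF): incremental learning over
# streaming mini-batches, accumulating knowledge window by window.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

accuracies = []
for window in range(20):                           # 20 mini-batches of a stream
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stable synthetic concept
    if window > 0:                                 # evaluate before updating
        accuracies.append(model.score(X, y))
    model.partial_fit(X, y, classes=classes)       # incremental update

print(f"first-window accuracy: {accuracies[0]:.2f}")
print(f"last-window accuracy:  {accuracies[-1]:.2f}")
```

On a drifting stream one would additionally reweight or reset base learners per window, which is the role the boosting-based ensemble plays in the framework.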

12.Machine learning enabled genetic and functional interpretation of the epitranscriptome

Author:Bowen Song 2022
Abstract:Increasing evidence suggests that RNA modifications regulate many important biological processes. To date, more than 170 types of post-transcriptional RNA modifications have been discovered. With recent advances in sequencing techniques, tens of thousands of modification sites are identified in a typical high-throughput experiment, posing a key challenge: distinguishing the functional modified sites from the remaining ‘passenger’ (or ‘silent’) sites. To ensure that the massive epitranscriptome datasets are properly exploited, annotated, and shared, bioinformatics solutions were developed with various focuses. In this thesis, we first describe a comparative conservation analysis of the human and mouse m6A epitranscriptomes at single-site resolution. A novel scoring framework, ConsRM, was devised to quantitatively measure the degree of conservation of individual m6A sites. ConsRM integrates multiple information sources within a positive-unlabeled learning framework, combining genomic and sequence features to trace subtle hints of conservation at the epitranscriptome layer. With a series of validation experiments in mouse, fly and zebrafish, we showed that ConsRM outperformed well-adopted conservation scores (phastCons and phyloP) in distinguishing conserved from non-conserved m6A sites. Additionally, m6A sites with a higher ConsRM score are more likely to be functionally important. To further unveil the functional epitranscriptome, we investigated the potential influence of genetic factors on epitranscriptome disturbance. Recent studies have found close associations between RNA modifications and multiple pathophysiological disorders; the precise identification and large-scale prediction of disease-related modification sites can therefore contribute substantially to understanding potential disease mechanisms.
Consequently, we developed a computational pipeline to systematically identify RNA modification-associated variants and their affected modification regions, with emphasis on their disease and trait associations. Furthermore, we describe a subsequent study on the dynamics of the RNA methylome across different tissues, elucidating the tissue-specific impact of somatic variants on m6A methylation. TCGA cancer mutations (derived from 27 cancer types) that may lead to the gain or loss of m6A sites in the corresponding cancer-originating tissues were systematically evaluated and collected. Taken together, the proposed bioinformatics pipelines and databases should serve as useful resources for the functional discrimination and annotation of massive epitranscriptome data, with implications for potential disease mechanisms acting through the epitranscriptome layer.
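The positive-unlabeled learning idea behind ConsRM can be illustrated generically. The sketch below is a toy Elkan-Noto-style PU scheme on synthetic data; the features, model, and data are assumptions for illustration and are not ConsRM itself. Labeled positives stand in for validated conserved sites, and unlabeled data mix hidden positives with negatives.

```python
# Toy positive-unlabeled (PU) learning sketch (Elkan-Noto style), not ConsRM:
# train labeled-positive vs unlabeled, then rescale scores by
# c = E[s(x) | x is a labeled positive] to recover P(positive | x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
pos = rng.normal(loc=2.0, size=(300, 2))       # synthetic "conserved" sites
neg = rng.normal(loc=-2.0, size=(300, 2))      # synthetic "non-conserved" sites
labeled_pos = pos[:150]                        # only some positives are labeled
unlabeled = np.vstack([pos[150:], neg])        # mix of hidden pos and neg

X = np.vstack([labeled_pos, unlabeled])
s = np.r_[np.ones(len(labeled_pos)), np.zeros(len(unlabeled))]
clf = LogisticRegression().fit(X, s)

c = clf.predict_proba(labeled_pos)[:, 1].mean()       # P(labeled | positive)
p_unlabeled = clf.predict_proba(unlabeled)[:, 1] / c  # corrected P(positive)
print(f"mean corrected score, hidden positives: {p_unlabeled[:150].mean():.2f}")
print(f"mean corrected score, true negatives:  {p_unlabeled[150:].mean():.2f}")
```

The corrected scores separate the hidden positives from the negatives even though no negative labels were ever supplied, which is the property a PU framework exploits when only validated positive sites exist.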

13.Catalytic Upgrading of Biomass Fast Pyrolysis Vapours: Impact of Red Mud, Metal Oxides and Composites

Author:Jyoti Gupta 2020
Abstract:The overall objective of this work is to investigate the effect of industrial wastes and low-cost materials as catalysts for upgrading fast pyrolysis products and obtaining valuable chemicals. Red mud, a by-product of the Bayer process in the aluminium industry, was used as a catalyst with beechwood for the in-situ upgrading of fast pyrolysis vapour products. It was revealed that catalysis of beechwood with thermally pre-treated red mud enhanced the vapour upgrading effect. The individual oxides (α-Al2O3, Fe2O3, SiO2, and TiO2) that are the main constituents of red mud were also tested to identify their individual impact on the upgrading process. A biomass/catalyst weight ratio (wt. ratio) of 1:4, on the basis of relative peak area, showed the strongest effect on the product distribution. Red mud was found to reduce phenolic compounds and promote the formation of cellulose- and hemicellulose-derived furfurals and hemicellulose-derived acetic acid, which can be used for the production of a broad range of chemicals. α-Al2O3 and Fe2O3 reduced the relative yield of phenols as well, whereas the formation of furfurals was promoted by Fe2O3 and TiO2. SiO2 showed a negligible effect on fast pyrolysis vapours. The impact of the catalysts on the product distribution is discussed for phenols, furfurals, and acids, for which the strongest effects were observed. This work also investigated the activity of CaO as a catalyst in aliphatic and cyclic ketonisation reactions and in the depletion of phenolic compounds during the catalytic fast pyrolysis of OW. Three basic aspects were investigated: the heterogeneous character of CaO at different wt. ratios for the catalytic fast pyrolysis of OW, the stability of the catalyst upon re-utilisation in successive runs, and the role of H2O and CO2 in the deterioration of catalytic performance on contact with atmospheric air.
The CaO catalyst promoted selectivity for ketonisation reactions, with the formation of acetone and cyclic ketones, whereas most phenolic compounds were depleted. Characterisation by X-ray diffraction (XRD) and Fourier transform infrared (FT-IR) spectroscopy led to the conclusion that CaO chemisorbs significant amounts of H2O and CO2 on contact with room air. It was demonstrated that CO2 was the main deactivating agent, whereas the negative effect of water was less important. The catalyst was reused over several runs without significant deactivation. Activation by outgassing at temperatures of 950 °C was required to revert the CO2 poisoning. In order to investigate the impact of crystal structure on the upgrading of fast pyrolysis products, five single-phase compounds (CaTiO3, CaSiO3, Ca2Fe2O5, Ca2FeAlO5 and CaAl2O4) were synthesised and employed for the catalytic upgrading of biomass fast pyrolysis vapours. None of these compounds showed strong catalytic activity in transforming undesirable compounds into valuable ones; however, their impact was seen in a decrease of the overall yield of pyrolysis products. Finally, two types of composites (CaTiO3/CaO and Ca2Fe2O5/CaO) at different mol % were synthesised to check for synergy and to prevent sintering of the CaO catalyst over multiple carbonation and decarbonisation cycles, and the results were compared with the catalytic capability of pure CaO. It was found that combining CaTiO3 with CaO did not impair the catalytic performance of CaO; rather, CaTiO3/CaO composites were found to further assist the CaO catalytic activity in the selectivity of ketonisation reactions towards acetone and cyclic ketone formation. On the contrary, the selectivity of ketonisation reactions towards acetone formation decreased, with incomplete conversion of acetic acid, in Ca2Fe2O5/CaO composites. Furfural transformation and phenol depletion were also affected by the presence of Ca2Fe2O5 with CaO.

14.Discrete element modelling of concrete behaviour

Author:Sanmouga Marooden 2018
Abstract:This work presents a three-dimensional (3D) simulation study of concrete behaviour in uni-axial compressive and flexural tests using discrete element modelling (DEM). The proposed numerical models are an unreinforced concrete cylinder under a uni-axial compressive test, an unreinforced concrete beam under a three-point flexural test and, lastly, a steel-reinforced concrete beam under a four-point flexural test. These models were built with the FISH and Python programming languages (see Appendix A1 for the code created) and run in the computer program Particle Flow Code (PFC 3D). The main aim of this thesis is to validate the numerical models developed and to study crack initiation and the failure process in order to understand the fracture behaviour of concrete. The particles were distributed using an algorithm based on sieve test analysis. The parameters were calibrated to validate the numerical models against the experimental results. It was observed that all three models developed show a strong correlation with the laboratory experiments in terms of stress-strain response, load-displacement response, crack pattern and macroscopic crack development. Once the bond between the spheres is broken, it leads to the formation of microscopic cracks, which are not visible in a laboratory experiment. DEM can help to identify which part is more prone to the evolution of microscopic cracks into macroscopic cracks within the discrete fracture network. In addition, the rosette plot allows identifying the orientations that produce a significant number of micro cracks, which is essential for designing structures. From the observations recorded in this research, DEM is capable of reproducing concrete behaviour both quantitatively and qualitatively. It is also possible to measure the strain energy stored in the linear contact bonds and parallel bonds.
At the yield point, which corresponds to the maximum number of microcracks recorded, that strain energy is released in the form of kinetic energy, frictional slip energy, dashpot energy and local damping. This can be extended to compute fracture energy in future work. Hence, it can be concluded that DEM can be used to study the heterogeneous nature of concrete as well as the random nature of fracturing in concrete structures.
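The linear contact-bond concept described above can be sketched in miniature. The following is a hypothetical 1-D illustration with assumed stiffness and strength values, not the calibrated PFC 3D parameters from the thesis: a bond carries force proportional to separation until a tensile strength is exceeded, at which point it breaks (a micro-crack) and its stored strain energy is released.

```python
# Minimal 1-D sketch of a linear contact bond between two spheres.
# Parameter values are assumed for illustration, not calibrated values.
k = 1.0e8         # bond normal stiffness (N/m), assumed
strength = 5.0e3  # bond tensile strength (N), assumed

def bond_force(overlap):
    """Tensile force in the bond; returns (force, broken)."""
    f = k * overlap                 # linear contact-bond law: F = k * u
    return (0.0, True) if f > strength else (f, False)

def strain_energy(overlap):
    """Elastic energy stored in the bond: E = k * u**2 / 2."""
    return 0.5 * k * overlap**2

u_crit = strength / k               # separation at which the bond snaps
f, broken = bond_force(0.5 * u_crit)
assert not broken                   # below strength: bond intact
f2, broken2 = bond_force(2.0 * u_crit)
assert broken2 and f2 == 0.0        # above strength: micro-crack forms
print(f"energy released at breakage: {strain_energy(u_crit):.4f} J")
```

Summing such energy releases over all broken bonds is, in spirit, how the kinetic, frictional and damping energy terms above are tracked at the yield point.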

15.Molecular ecological characterization of a honey bee ectoparasitic mite, Tropilaelaps mercedesae

Author:Xiaofeng DONG 2016
Abstract:Tropilaelaps mercedesae (the small mite) is one of two major honey bee ectoparasitic mite species responsible for colony losses of Apis mellifera in Asia. Although T. mercedesae mites are still restricted to Asia (except Japan), they may spread all over the world due to the ever-increasing global trade in live honey bees, as Varroa destructor did. Understanding the ecological characteristics of T. mercedesae at the molecular level could potentially improve management and control programs. However, the molecular and genomic characterization of T. mercedesae remained poorly studied, and no genes had been deposited in GenBank to date. Therefore, I conducted T. mercedesae genome and transcriptome sequencing. By comparing the T. mercedesae genome with those of other arthropods, I gained new insights into the evolution of Parasitiformes and the evolutionary changes associated with the specific habitats and life history of this honey bee ectoparasitic mite, which could potentially improve control programs for T. mercedesae. Finally, characterization of the T. mercedesae transient receptor potential channel, subfamily A, member 1 (TmTRPA1) should also help us to develop a novel control method for T. mercedesae.

16.Stochastic Behavior, Term Structure and Margin Adequacy in VIX Futures Market

Author:Chen Yang 2022
Abstract:In the 2008 financial crisis, many investors endured heavy financial losses caused by sharply increased volatility. The growing demand for hedging volatility by trading volatility derivatives has contributed to the rapid development of the VIX futures market in recent years. In this high-volatility market, the initial margin is the first line of defense for exchanges against potential default losses. An adequate initial margin can cover most losses caused by unexpected price movements, while market liquidity may be squeezed in the presence of high margins. Therefore, exchanges need to balance a trade-off in setting appropriate margins. The primary aim of this thesis is to develop an option-based framework for margin setting, standards, and evaluation in the VIX futures market. I further conduct a series of comprehensive empirical studies on the stochastic behavior, term structure, and margin adequacy of VIX futures. This research consists of three separate studies, each with a distinct focus. In the first study, I investigate jumps in the VIX futures market, as jumps usually result in extreme stochastic behavior in VIX futures prices, which is a primary concern of both investors and regulators. I provide empirical evidence of jumps in this market by employing three non-parametric methods for statistical testing, and further propose a non-parametric framework rooted in the generalized method of moments (GMM) and extreme value theory (EVT) to investigate the properties of jumps in VIX futures prices. The magnitudes of risk premiums are quantified at the price and variation levels by incorporating both VIX options and futures, showing that relatively higher premiums are paid by investors against large upward jumps in VIX futures prices, accompanied by smaller premiums gained from the variations of VIX futures.
In the second study, I propose multi-factor models for VIX options and futures, equipped with a set of hump-shaped volatility functions, to study their performance in pricing VIX options and in margin management for VIX futures. A generalized method of moments (GMM) procedure is developed to estimate these models by incorporating both the “forward-looking” information from VIX options and the “backward-looking” information from VIX futures. The empirical results suggest that the three-factor model outperforms the other candidate models in pricing VIX options, characterizes the stochastic behavior well, and captures the dynamics of the term structure of VIX futures. Moreover, the option-incorporated VaR and ES risk estimates Granger-cause the initial margins imposed by the Cboe Futures Exchange (CFE). In the third study, I develop an option-based framework to study margin setting, standards, and evaluation for VIX futures. More specifically, the payoffs of trading (long/short) positions in VIX futures are converted into those of barrier options under moderate assumptions. By virtue of this idea, the adapted framework transforms the tiered initial margin requirements into the prices of corresponding barrier options for long and short positions. I then propose two standards of margin setting (the zero-NPV standard and the zero-default-loss standard) to examine the adequacy of initial margins on VIX futures. These standards deliver bounds on the initial margins of VIX futures, which are estimated under the risk-neutral measure. The empirical results suggest that the tiered margins imposed by the CFE are sufficient to rule out a positive net present value (NPV) of futures positions but still not high enough to cover all possible default losses caused by fluctuations in VIX futures prices. Furthermore, I explore appropriate margin bounds for VIX futures with various maturities.
The proposed lower and upper bounds indicate, respectively, the minimum and maximum required margin standards for VIX futures. The margin bounds are evaluated from the perspectives of prudence and opportunity cost under the physical measure, suggesting that more attention should be paid to the capital burden placed on investors in the VIX futures market, despite its volatile nature.
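The option-based margin idea can be illustrated with a stylized calculation. The sketch below assumes driftless lognormal futures dynamics and hypothetical parameter values; the thesis treats the general case with barrier options and risk-neutral estimation. For a short futures position margined at level B, the uncovered loss at settlement is max(F_T − B, 0), which is the payoff of a call struck at the margin level.

```python
# Hedged sketch: expected uncovered loss of a short futures position margined
# at B, as a call-option payoff under assumed lognormal dynamics. All parameter
# values are hypothetical, not VIX futures estimates from the thesis.
import numpy as np

rng = np.random.default_rng(2)
F0, sigma, T = 20.0, 0.8, 0.25      # futures price, volatility, horizon
B = 26.0                            # hypothetical margin level per contract
n_paths = 200_000

z = rng.standard_normal(n_paths)
F_T = F0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
expected_default_loss = np.maximum(F_T - B, 0.0).mean()
print(f"expected uncovered loss beyond margin: {expected_default_loss:.3f}")
```

Under a zero-default-loss standard, B would be raised until this expectation is (nearly) zero; under a zero-NPV standard, a lower B suffices, which is the tension the bounds above formalize.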

17.Crisis Transmitting Effects Detection and Early Warning Systems Development for China's Financial Markets

Author:Peiwan Wang 2022
Abstract:With China’s economic development model attracting worldwide attention, there is a growing trend among domestic and international academics to study the risk transmission patterns and crisis forecasting mechanisms of China’s financial markets. Progress, however, has been hampered by two gaping research problems: 1) few studies construct comparative contagion models and integrated crisis forecasting systems for China’s financial markets; and 2) the econometric models applied to detecting risk spreading effects and forecasting financial crises have not yet been conclusively investigated in terms of their effectiveness for China. To fill these gaps, this research proposes two hybrid contagion models (CMs) and prototypes early warning systems (EWSs), with the aims of first analyzing the crisis linkages and transmission channels across domestic markets in hierarchical frameworks, and then predicting market turbulence by integrating crisis identification techniques with time-dependent deep learning neural networks. To accomplish these aims, the project proceeds in phases, addressing four technical challenges that map onto the two literature gaps: A) crisis identification based on distinguishing price volatility states; B) decomposition of multivariate correlation patterns to infer the interdependence structure and risk spillover dynamics; C) real-time warning signal generation, comparing traditional and stylized predictive models; and D) fusion of contagion information into the EWS frameworks to distinguish, using statistical validation metrics, the leading indicators among internal macroeconomic factors and external risk transmitters.
The research mainly contributes a comparative analysis of financial contagion detection and market turbulence prediction through hybrid model innovations for contagion model and early warning system development, and meanwhile brings practical significance for improving risk management in investment activities and supporting crisis prevention in policy-making. In addition, the experimental results corroborate a China-specific pattern of risk transmission and crisis warning: 1) the stock and real estate markets are verified to play the central role among risk transmitters, while the managed floating foreign exchange rate and the not fully liberalized bond market remain peripheral during crises; and 2) the all-round opening-up policy increases the possibility of domestic security markets being exposed to external risk factors, especially those relating to cash flows, energy commodities and precious metals.
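The crisis identification ingredient (challenge A above) can be illustrated generically. The toy sketch below uses synthetic returns with an assumed window length and quantile threshold; it is one common volatility-state labeling heuristic, not the thesis’s hybrid method: a window is flagged as a crisis state when its rolling volatility exceeds a high quantile of its own history.

```python
# Toy volatility-state crisis labeling on synthetic returns (illustrative
# heuristic only; window length and quantile are assumed values).
import numpy as np

rng = np.random.default_rng(3)
calm = rng.normal(0, 0.01, 400)        # low-volatility regime
stress = rng.normal(0, 0.04, 100)      # high-volatility regime ("crisis")
returns = np.concatenate([calm, stress])

window = 20
roll_vol = np.array([returns[i - window:i].std()
                     for i in range(window, len(returns) + 1)])
threshold = np.quantile(roll_vol, 0.90)
crisis = roll_vol > threshold          # boolean crisis signal per window

print(f"threshold: {threshold:.4f}")
print(f"crisis windows flagged: {crisis.sum()} of {len(crisis)}")
```

Labels produced this way can then serve as training targets for the time-dependent neural networks that generate forward-looking warning signals.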

18.Visual Attention Mechanism in Deep Learning and Its Applications

Author:Shiyang Yan 2018
Abstract:Recently, in computer vision, a branch of machine learning called deep learning has attracted considerable attention due to its superior performance in various computer vision tasks such as image classification, object detection, semantic segmentation, action recognition and image description generation. Deep learning aims at discovering multiple levels of distributed representations, which have been validated to be discriminatively powerful in many tasks. Visual attention is the ability of the vision system to selectively focus on the salient and relevant features in a visual scene. The core objective of visual attention is to minimise the amount of visual information that must be processed to solve complex high-level tasks, e.g., object recognition, which makes the whole vision process more efficient. Visual attention is not a new topic; it has been addressed in conventional computer vision algorithms for many years. The development and deployment of visual attention in deep learning algorithms are nevertheless of vital importance, since the visual attention mechanism matches well with the human visual system and also improves performance in many real-world applications. This thesis is on visual attention in deep learning, starting from recent progress in the visual attention mechanism, followed by several contributions on the visual attention mechanism targeting diverse applications in computer vision, which include action recognition from still images, action recognition from videos and image description generation. Firstly, the soft attention mechanism, which was initially proposed in combination with Recurrent Neural Networks (RNNs), especially Long Short-Term Memories (LSTMs), was applied in image description generation.
In this thesis, instead, as one contribution to the visual attention mechanism, the soft attention mechanism is proposed to plug directly into convolutional neural networks for the task of action recognition from still images. Specifically, a multi-branch attention network is proposed to capture the object that the human is interacting with and the scene in which the action is performed. The soft attention mechanism applied in this task plays a significant role in capturing multiple types of contextual information during recognition. Also, the proposed model can be applied in two experimental settings: with and without the bounding box of the person. The experimental results show that the proposed networks achieve state-of-the-art performance on several benchmark datasets. For action recognition from videos, our contribution is twofold. Firstly, the hard attention mechanism, which selects a single part of the features during recognition, is essentially a discrete unit in a neural network. This hard attention mechanism shows superior capacity in discriminating the critical information/features for the task of action recognition from videos, but often suffers from high variance during training, as it employs the REINFORCE algorithm as its gradient estimator. This raises another critical research question, i.e., the gradient estimation of discrete units in a neural network. In this thesis, a Gumbel-softmax gradient estimator is applied to achieve this goal, with much lower variance and more stable training. Secondly, to learn a hierarchical and multi-scale structure for the multi-layer RNN model, we embed discrete gates to control the information flow between the layers of the RNNs. To make the model differentiable, instead of using a REINFORCE-like algorithm, we propose to use Gumbel-sigmoid to estimate the gradients of these discrete gates.
For the task of image captioning, there are two main contributions in this thesis. Primarily, the visual attention mechanism can not only be used to reason on the global image features but also plays a vital role in the selection of relevant features from the fine-grained objects appearing in the image. To form a more comprehensive image representation, as a contribution to the encoder network for image captioning, a new hierarchical attention network is proposed to fuse the global image and local object features through the construction of a hierarchical attention structure, improving the visual representation for image captioning. Secondly, to solve an inherent problem of the RNN-based language decoder commonly used in image captioning, the exposure-bias issue, an adversarial training-based policy gradient optimisation algorithm is proposed to train the networks, instead of relying only on the supervised training scheme, with improved results on the evaluation metrics. In conclusion, comprehensive research has been carried out on the visual attention mechanism in deep learning and its applications, which include action recognition and image description generation. Related research topics have also been discussed, for example, gradient estimation for discrete units and the solution to the exposure-bias issue in the RNN-based language decoder. For action recognition and image captioning, this thesis presents several contributions which proved effective in improving existing methods.
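The Gumbel-softmax trick mentioned above can be illustrated in a few lines. The minimal NumPy sketch below is not the thesis’s implementation; it shows the core idea: adding Gumbel noise to logits and applying a temperature-scaled softmax yields a differentiable, nearly one-hot sample whose argmax follows the intended categorical distribution.

```python
# Minimal Gumbel-softmax sketch: a differentiable relaxation of categorical
# sampling, so discrete attention units can be trained by backprop instead of
# the high-variance REINFORCE estimator.
import numpy as np

rng = np.random.default_rng(4)

def gumbel_softmax(logits, tau):
    """Soft one-hot sample; as tau -> 0 it approaches a discrete draw."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())                               # stable softmax
    return e / e.sum()

logits = np.log(np.array([0.1, 0.2, 0.7]))    # target categorical probabilities
samples = np.stack([gumbel_softmax(logits, tau=0.1) for _ in range(5000)])
freq = (samples.argmax(axis=1)[:, None] == np.arange(3)).mean(axis=0)
print(f"empirical argmax frequencies: {freq.round(2)}")
```

The empirical argmax frequencies approach [0.1, 0.2, 0.7], while each sample remains a smooth function of the logits, which is what makes gradient estimation for the discrete unit tractable.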

20.The Effect of China's Post-1994 Fiscal Structure on Social Welfare

Author:Yidan Liu 2022
Abstract: Many researchers have argued that the welfare effect of fiscal decentralisation (FD) theory is not clearly determined outside of democratic countries. To provide more empirical evidence for the welfare effect of FD, this study focuses on the effect of China’s FD on social welfare in the post-1994 fiscal structure. Because the 1994 tax reform effectively resulted in fiscal recentralisation on the revenue side and fiscal decentralisation on the expenditure side, I construct indicators of revenue decentralisation and expenditure decentralisation and include both sets of indicators to ascertain the marginal effect of each under China’s post-1994 fiscal structure. This study consists of three chapters. In the first chapter, I examine the impact of the post-1994 fiscal structure on provincial environmental spending from 1994 to 2017. The findings show that expenditure decentralisation caused a reduction in provincial governments’ environmental spending. Although revenue decentralisation led to an increase in provincial environmental spending, its effect on the latter was not as significant as the effect of expenditure decentralisation. On balance, it can be inferred that, given China’s official promotion system and other institutional and policy factors, its post-1994 fiscal structure has had a negative effect on provincial environmental spending. In the second chapter, I explore the impact of fiscal structure on the urban-rural income gap in China after 1994, using data from 2007-2018. The findings suggest that the post-1994 fiscal structure significantly reduced the urban-rural income gap. This result is reinforced when the influence of FD on the rural-urban income gap is examined via its impact on public investment in education, healthcare and social security. The third chapter focuses on how China’s post-1994 fiscal structure has affected housing affordability in China, based on panel data from 1999 to 2017. 
Due to the spectacular rise of housing prices in major cities, housing has become increasingly unaffordable in China. This chapter examines the role of the fiscal structure, under which most city governments rely heavily on land-lease revenues to help finance their many fiscal responsibilities as well as expenditure on the pro-growth projects important to city officials’ promotion under China’s GDP-focused promotion system. The findings show that China’s post-1994 fiscal structure has impeded the effectiveness of its affordable housing policies. This is because the fiscal structure, together with China’s promotion system, has been an important factor behind the sharp rise in housing prices, on the one hand, and the lack of investment in affordable housing programmes, on the other. Considering that current welfare levels can be heavily determined by past levels, the dynamic effects in these processes have been modelled by including the lagged welfare-related dependent variables on the right-hand side of the regression equations. A system generalised method of moments (Sys-GMM) estimator has been used in this research to solve the autocorrelation problem caused by the presence of lagged dependent variables while accounting for the endogeneity of the FD variable. In summary, the findings suggest that China’s post-1994 fiscal structure had a mixed effect on social welfare. It had a negative effect on environmental spending and housing affordability, but a positive effect on reducing urban-rural income inequality. The conclusions from this research have implications for the current debate on China’s fiscal framework and the design of future fiscal reforms. The results suggest that Chinese policymakers should consider redistributing fiscal responsibilities to match the revenue and expenditure responsibilities of all levels of government. Furthermore, the results suggest that future fiscal reforms should not only involve fiscal policy.
Given that China’s official promotion system influences the effectiveness of sub-national officials’ decision-making, policymakers should consider adjusting the assessment criteria of the official promotion system to incentivise local officials to provide higher levels of social welfare.
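The econometric motivation for the Sys-GMM estimator mentioned above can be illustrated by simulation. The sketch below uses arbitrary assumed values for the panel size, time span and autoregressive coefficient; it demonstrates the well-known downward (Nickell) bias of within-group OLS when a lagged dependent variable is combined with fixed effects over a short panel, which is the problem Sys-GMM is designed to avoid.

```python
# Illustration of Nickell bias: within-group OLS on a dynamic panel with
# fixed effects and small T underestimates the persistence parameter.
# N, T and rho are assumed values for the simulation.
import numpy as np

rng = np.random.default_rng(5)
N, T, rho = 500, 5, 0.5                 # many units, few periods

alpha = rng.normal(size=N)              # unit fixed effects
y = np.zeros((N, T + 1))
for t in range(T):
    y[:, t + 1] = rho * y[:, t] + alpha + rng.normal(size=N)

# Within transformation (demean per unit), then OLS of y_t on y_{t-1}
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_hat = (y_lag_d * y_cur_d).sum() / (y_lag_d**2).sum()

print(f"true rho = {rho}, within-OLS estimate = {rho_hat:.3f}")  # biased down
```

Because the demeaned lagged variable is mechanically correlated with the demeaned error, the estimate falls well below the true value; Sys-GMM instruments the lag with deeper lags in levels and differences to remove this bias.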

21.The role of LytR-CpsA-Psr proteins in cell envelope biogenesis of Mycobacterium smegmatis

Author:Abhipsa Sahu 2020
Abstract:Tuberculosis, caused by Mycobacterium tuberculosis (Mtb), is one of the leading causes of mortality worldwide, and with the upsurge of multidrug-resistant tuberculosis it has become a global threat. Therefore, the development of new drugs needs immediate attention, which in turn requires the identification of potential drug targets. The cell envelope of mycobacteria is one such attractive drug target owing to its role in maintaining the structural integrity and pathogenicity of the bacterium. The LytR-CpsA-Psr (LCP) family of proteins in Mycobacterium spp. has been shown to catalyze the coupling of arabinogalactan and peptidoglycan and to possess pyrophosphatase activity. The four LCP protein homologues present in Mycobacterium smegmatis (Msmeg), MSMEG_0107, MSMEG_1824, MSMEG_5775 and MSMEG_6421, have not been extensively investigated with a focus on the existence and interplay of multiple LCP proteins. In this study with this non-pathogenic model organism, all four LCP homologues were shown to possess pyrophosphatase activity, with significantly higher activity displayed by MSMEG_0107 and MSMEG_5775. To further study the role of the LCP proteins in the physiology of the bacterium, single and double deletion strains lacking the three non-essential lcp genes were created along with the respective complemented strains. All the generated mutants showed different phenotypes in the different assays, though usually not very severe ones. However, the double-deletion lcp mutant ΔΔ(0107+5775) was the most affected strain and displayed a disrupted cell envelope, as evident from a reduced growth rate, slower cellular aggregation, diminished biofilm formation at the air-liquid interface, altered morphology, and an increased susceptibility to surface detergent, lysozyme and a wide range of antibiotics.
Thus, the loss of both MSMEG_0107 and MSMEG_5775 had profound effects on the mycobacterial cell envelope, and the pair could be further investigated as a possible combined drug target by extending these studies to Mtb. A novel approach in this study was the detection of exposed mycobacterial Galf moieties of arabinogalactan by the EB-A2 monoclonal antibody in the double lcp deletion mutant ΔΔ(0107+5775). Transcription profiling of all the lcp genes in the wild-type strain and the mutants revealed differential expression of these genes under both standard and stress conditions. Loss of MSMEG_5775 resulted in upregulation of the other three lcp genes relative to the wild-type strain under standard conditions, whereas under both acid and lysozyme stress it downregulated all other lcp genes, while loss of MSMEG_6421 upregulated them. Lastly, an in silico approach led to the identification of putative transcription factors in mycobacteria and related species, which could be further investigated and experimentally confirmed. This study improves our understanding of the roles of the lcp homologues in Msmeg; the differential expression results suggest that identifying the responsible regulator(s) would be a promising route to understanding this family of proteins further.

22.Preparation and investigation of anode materials with highly conductive materials for high-performance lithium-ion batteries

Author:Yinchao Zhao 2020
Abstract:With the increasing demand for energy storage technologies, lithium-ion batteries have been considered one of the most promising candidates owing to their high energy density, excellent cyclic performance, and environmental benignity. Indeed, lithium-ion batteries see extensive application in the market, for example in portable electronic equipment. However, the commercialized graphite anodes for lithium-ion batteries, with their low theoretical specific capacity, fall far short of the tremendous demands created by the fast-growing market. Therefore, enormous efforts have been devoted to developing electrode materials with better recyclability and higher capacity for next-generation lithium-ion batteries. Although alloy anode materials such as silicon have the highest gravimetric and volumetric capacity, their huge volume change and low electron and ion conductivity still hinder broad application in fields such as large-scale energy storage systems. Similar challenges also impede the wide implementation of conversion materials in Li-ion batteries. This work focuses on employing different highly conductive materials to improve the electrical conductivity of the entire electrode; at the same time, the resulting conductive framework helps accommodate the substantial volume change of the active materials. In Chapter 3, copper nanowires and multi-wall carbon nanotubes coated on the surface of Cu foils form a porous substrate to support the active materials, and silicon was deposited on this substrate, templated by the copper nanowires and multi-wall carbon nanotubes. The resulting copper nanowires/silicon and multi-wall carbon nanotubes/silicon core-shell structures intrinsically reduce the volume expansion of the active materials, while the pores created by the intertwined copper nanowires and multi-wall carbon nanotubes further accommodate the stress from volume change. 
In addition, the copper nanowires/silicon and multi-wall carbon nanotubes/silicon core-shell structures provide highly efficient electron and Li+ diffusion pathways. As a result, we demonstrated that the multi-wall carbon nanotubes/copper nanowires/silicon anode delivers a high specific capacity of 1845 mAh g-1 in a half cell at a current density of 3.5 A g-1 after 180 cycles, with a capacity retention of 85.1 %. In Chapter 4, a free-standing silicon-based anode was developed by preparing a three-dimensional copper nanowires/silicon nanoparticles@carbon composite using freeze-drying. Silicon nanoparticles were uniformly attached along the copper nanowires and reinforced by the carbon coatings. The three-dimensional conductive structure allows the silicon nanoparticles to distribute evenly and enhances the electrical and ionic conductivity of the whole electrode. Similarly, the considerable interspace produced by the three-dimensional structure relieves the stress produced by the vast volume expansion of the silicon nanoparticles, which is also restricted by the carbon coating layers during the charge and discharge processes. Moreover, the outer layers strengthen the stability of the three-dimensional framework and the contact between the copper nanowires and silicon nanoparticles. The electrochemical performance of the copper nanowires/silicon nanoparticles@carbon composite electrode was measured and exhibits excellent cycling stability. In Chapter 5, a new highly conductive material, MXene nanosheets, was introduced to promote electrochemical performance in lithium-ion batteries. Cobalt oxides were chosen as the active material for their controllable and facile synthesis; as conversion-type anode materials for lithium-ion batteries, cobalt oxides face issues similar to those of silicon. Therefore, an anode comprising cobalt oxide nanoparticles mixed with MXene nanosheets on Ni foams was developed. 
Small cobalt oxide nanoparticles were uniformly distributed within the MXene nanosheets, leading to high lithium-ion and electron transport efficiency while preventing restacking of the MXene nanosheets and the colossal volume change of the cobalt oxide nanoparticles. As shown in Chapter 5, the cobalt oxides/MXene composite electrode retains a stable capacity of 307 mAh g-1 after 1000 cycles at a current density approaching 5 C, which indicates the enormous potential of cobalt oxides/MXene composites as anodes for high-performance lithium-ion batteries.

23.Essays in Quantitative Investments

Author:Yurun Yang 2018
Abstract:This thesis studies the characteristics of Chinese futures markets and quantitative investment strategies. The main objective is to provide a comprehensive analysis of the performance of quantitative investment strategies in the Chinese market. Furthermore, with an econometric analysis, the stylised facts of the Chinese futures markets are documented. Extensive backtesting results on the performance of momentum, reversal and pairs-trading type strategies are provided. In the case of pairs-trading type strategies, the risk-return relationship is characterised by the length of the maximum holding period, and thus reflected in the maximum drawdown risk. In line with increasing holding periods, the profitability of pairs trading increases over longer holding periods. Therefore, the abnormal returns from pairs trading in the Chinese futures market do not necessarily reflect market inefficiency. Momentum and reversal strategies are compared by employing both high- and low-frequency time series with precise estimation of transaction costs. The comparison of momentum and reversal investment strategies at the intra- and inter-day scales shows that the portfolio rebalancing frequency significantly impacts the profitability of such strategies. Complementarily, the excess returns of inter-day momentum trading, with precise estimates of transaction costs included, show that quantitative investment strategies consistently produce abnormal profits in the Chinese commodity futures markets. However, from a risk-adjusted view, these returns are obtained only by bearing additional drawdown risk. Finally, this thesis suggests that investors should choose quantitative trading strategies according to their investment horizon, tolerance for maximum drawdown and portfolio rebalancing costs.
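The pairs-trading idea summarised above can be sketched in a few lines. This is a minimal illustration only, with hypothetical entry/exit thresholds; the thesis's actual strategies, maximum-holding-period rules and transaction-cost model are not reproduced here.

```python
# Toy pairs-trading signal: trade when the spread between two related series
# deviates far from its mean, and close the position once it reverts.
# Thresholds (entry/exit z-scores) are illustrative assumptions.

def zscore_signal(spread, entry=2.0, exit_=0.5):
    """Return a position series: -1 = short the spread, +1 = long, 0 = flat."""
    n = len(spread)
    mean = sum(spread) / n
    var = sum((x - mean) ** 2 for x in spread) / n
    std = var ** 0.5 or 1.0          # guard against a constant spread
    positions, pos = [], 0
    for x in spread:
        z = (x - mean) / std
        if pos == 0:
            if z > entry:
                pos = -1             # spread unusually wide: short it
            elif z < -entry:
                pos = +1             # spread unusually narrow: go long
        elif abs(z) < exit_:
            pos = 0                  # mean reversion achieved: close
        positions.append(pos)
    return positions
```

In practice the mean and standard deviation would be estimated on a rolling window rather than the full sample, and profitability would then be evaluated net of the rebalancing costs the abstract emphasises.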

24.Estimation of Radio Frequency Impairments and Channels for Multi-Carrier 5G and Beyond 5G Systems

Author:Yujie Liu 2020
Abstract:Multi-carrier techniques play an important role in fifth generation (5G) and beyond-5G (B5G) wireless communication systems, as they support high data rate communications and exhibit high resilience to frequency-selective fading. However, radio frequency (RF) impairments, such as carrier frequency offset (CFO) and in-phase/quadrature-phase (IQ) imbalance, hinder the effectiveness of multi-carrier techniques, so the estimation of RF impairments and channels is essential. In this thesis, the joint estimation of RF impairment(s) and channel(s) is considered for various multi-carrier 5G and B5G systems. The thesis makes four main contributions. First, a joint multi-time-of-arrival (TOA) and multi-CFO estimation scheme is proposed for multi-user orthogonal frequency division multiplexing (OFDM) systems, where TOA is a key component of the channel. With a carefully designed pilot, the U TOAs and U CFOs of U users are separated jointly, dividing a complex 2U-dimensional estimation problem into 2U low-complexity one-dimensional estimation problems. Two CFO estimation approaches, a low-complexity closed-form solution and a high-accuracy null-subcarrier assisted approach, are proposed to estimate the integer and fractional parts of each CFO as a whole. Each TOA estimate is made robust against CFO by exploiting the structure of the inter-carrier interference (ICI) matrix. Cramer-Rao lower bounds (CRLBs) of multi-TOA and multi-CFO estimation are derived for multi-user OFDM systems. Extensive simulation results confirm the effectiveness of the proposed scheme. Second, an iterative semi-blind (ISB) receiver structure is proposed for short-frame full-duplex (FD) OFDM systems with CFO. An equivalent system model with CFO included implicitly is first derived. A subspace-based blind channel estimation is proposed for the initial stage, followed by single-pilot assisted CFO estimation and elimination of channel ambiguities. 
Then, the channel and CFO are refined iteratively. The integer and fractional parts of the CFO over the full range are extracted as a whole and in closed form at each iteration. The proposed ISB receiver, with halved training overhead, demonstrates superior performance to existing methods and fast convergence; CRLBs are derived to verify its effectiveness. Third, a robust semi-blind CFO and channel estimation scheme is proposed for generalised frequency division multiplexing (GFDM) systems. Based on an equivalent system model with CFO included implicitly, initial blind channel estimation is performed by a subspace method. Then, the full-range CFO and the channel ambiguity are estimated consecutively, utilising a small number of nulls and pilots in a single subsymbol, respectively. Both the CFO and channel estimates demonstrate high robustness against the ICI and inter-symbol interference (ISI) caused by the nonorthogonal filters of GFDM. Simulation results verify that the bit error rate (BER) performance of the proposed scheme approaches the ideal case with perfect CFO and channel estimation. Last but not least, a semi-blind joint estimation scheme for multiple channels, multiple CFOs and IQ imbalance is proposed for generalised frequency division multiple access (GFDMA) systems, with no constraints on the carrier assignment scheme, modulation type, cyclic prefix length or symmetry of the IQ imbalance. By means of a subspace approach, the CFOs and channels of U users are first separated into U groups. For each group, the CFO is estimated by minimising the smallest eigenvalue, whose corresponding eigenvector is used to determine the channel. Then, the IQ imbalance parameters and channel ambiguities are estimated jointly using very few pilots. Simulation results show that the proposed scheme significantly outperforms existing methods at much lower training overhead, and achieves performance close to the derived CRLB.  
To summarise, this thesis develops estimation schemes for RF impairments and channels in 5G and B5G systems, considering both OFDM- and GFDM-based multi-carrier techniques, half-duplex and full-duplex modes, and single-user and multi-user systems. The developed schemes are either pilot-aided with low complexity or subspace-based semi-blind with high spectral efficiency. This research work is an essential reference for academics and professionals involved in this topic. 
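To give a feel for closed-form fractional CFO estimation from a pilot, the sketch below implements the classic repeated-preamble (Moose-type) estimator, not the thesis's subspace or semi-blind schemes: when the second half of a pilot repeats the first, the CFO appears as a fixed phase rotation between the halves.

```python
import cmath
import math

# Toy fractional-CFO estimator from a repeated pilot block (Moose-type).
# Assumes rx[n + half] = rx[n] * exp(j*2*pi*eps*half), i.e. a noiseless,
# flat channel; eps is the CFO in cycles per sample, |eps| < 1/(2*half).

def estimate_cfo(rx, half):
    """Estimate eps from the phase of the half-to-half correlation."""
    corr = sum(rx[n].conjugate() * rx[n + half] for n in range(half))
    return cmath.phase(corr) / (2 * math.pi * half)

# Simulate a received pilot rotated by eps = 0.01 cycles/sample.
eps, half = 0.01, 16
rx = [cmath.exp(2j * math.pi * eps * n) for n in range(2 * half)]
print(estimate_cfo(rx, half))  # close to 0.01
```

The estimator is closed-form and cheap, but its unambiguous range shrinks as the repetition distance grows, which is one reason schemes that recover the integer and fractional CFO parts "as a whole", as in this thesis, are attractive.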

25.Effects of multiple stressors on the structure and function of stream benthic communities

Author:Noel Juvigny-Khenafou 2020
Abstract: The expansion of human activities has intensified and diversified the pressures applied to freshwater ecosystems. In particular, land-use stressors have been pervasive and widespread; as a result, most freshwater systems are now under the influence of anthropogenic stressors. For instance, agricultural development and urbanisation have elevated nutrient levels, facilitated the accumulation of chemicals, modified natural flow velocities and promoted runoff and sediment loads. Further, stressors often interact with each other, complicating the prediction of their effects on communities and ecosystem functioning; for example, reductions in flow velocity and discharge facilitate the accumulation of chemicals and fine sediments. To evaluate the effects of multiple stressors and inform decision-makers, investigations have been conducted worldwide on different trophic levels and ecosystem processes. Most notably, microbes, algae and macroinvertebrates have often been studied in isolation using taxonomic and, more recently, molecular methods. However, communities comprise complex population dynamics involving all trophic levels over time, and emergent ecosystem properties such as decomposition or net productivity result from multiple interactions between biotic and abiotic parameters. This calls for more holistic approaches encompassing as many facets of biodiversity as possible. To investigate the effects of multiple land-use stressors associated with agriculture and urbanisation, a highly replicated streamside mesocosm experiment was built and performed in a near-pristine montane environment. The work was conducted in autumn 2018 in the Jiulongfeng Nature Reserve, Huangshan, Anhui (China) and consisted of 64 experimental units naturally colonised by stream organisms for 3 weeks. I used a 4-factor full-factorial design, manipulating fine sediment deposition, flow velocity and nutrient concentration at two sampling times (2 and 3 weeks of exposure). 
Linear models were then applied to analyse the temporal responses of the microbial communities associated with both leaf litter decay and benthic biofilm formation, as well as the benthic macroinvertebrate communities. Additionally, to infer the emergent properties and functional characteristics of the different communities, four commonly used functional indices were investigated: (i) leaf litter decomposition in Chapter 2, (ii) database-predicted functional profiles in Chapter 3, and (iii) functional traits and (iv) functional diversity in Chapter 4. I then drew on the knowledge acquired in the experimental part of my programme to outline a novel framework for tackling multiple-stressor interactions in riverine networks (Chapter 5). The molecular analysis of microbial communities showed that the stressors affected species composition differently for microbes associated with leaf-litter decomposition and those associated with biofilm development. Whilst nutrient enrichment and flow velocity reduction appeared to be the most pervasive factors affecting microbial decomposer communities on leaf substrates, fine sediment deposition and flow velocity reduction were most important for biofilm communities. Fine sediment deposition and flow velocity reduction were also the dominant factors driving macroinvertebrate community composition. Furthermore, both molecular analyses indicated that microbial clusters could be identified in response to the dominant stressors. In terms of interactions, 2-way interactions involving sediment and flow velocity reduction (sediment × flow velocity reduction) or nutrient enrichment and sediment (nutrient enrichment × sediment) were the most pervasive overall; 3-way interactions involving nutrient enrichment, sediment deposition and flow velocity reduction (nutrient enrichment × sediment × flow velocity reduction) were also detected. 
Furthermore, temporal dynamics were fairly widespread, highlighting the importance of integrating a temporal factor in multiple-stressor studies. Finally, in accordance with the existing literature, changes in abiotic factors often led to functional rearrangements of the different communities, underlining the environmental filtering and niche selection processes operating in the system. Integrating the findings of this thesis into the wider subject area, I suggest an ecosystem approach to multiple-stressor interaction research. Specifically, I propose that future work adopt a spatiotemporal framework that better integrates energy fluxes across trophic levels and the flow of resources and material through riverine networks. Further, combining alpha diversity indices with functional traits aids understanding of the mechanisms that yield emergent ecosystem properties, such as productivity. Together, spatiotemporal networks and functional measurements are anticipated to facilitate prediction of the future stability of freshwater systems under stressor accumulation.  
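The interaction terms discussed above (e.g. nutrient enrichment × sediment) can be made concrete with a minimal 2 × 2 factorial comparison: the observed combined effect is set against the additive expectation from the single-stressor effects. This is only an illustrative sketch with made-up numbers; the thesis fits full linear models across four factors and two sampling times.

```python
# Classify a 2-way stressor interaction in a 2x2 factorial design by
# comparing the observed combined response with the additive expectation.
# The sign convention (deviation above additive = "synergistic") is a
# simplifying assumption; formal analyses test the interaction term itself.

def interaction(control, a_only, b_only, both, tol=1e-9):
    effect_a = a_only - control
    effect_b = b_only - control
    additive = control + effect_a + effect_b   # expected if no interaction
    dev = both - additive
    if abs(dev) < tol:
        return "additive"
    return "synergistic" if dev > 0 else "antagonistic"

# Hypothetical litter decomposition rates (k, per day) under two stressors:
print(interaction(0.020, 0.025, 0.015, 0.014))  # antagonistic
```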

26.Investigation on the electrochemical performance of the Silicon and Germanium base lithium-ion batteries

Author:Chenguang Liu 2020
Abstract:Lithium-ion batteries (LIBs) currently dominate the commercial market owing to their environmental benignity, suitable energy density, and long cycle lifetime. Commercial LIBs commonly use graphite as the anode material; however, it has become clear that the theoretical capacity of graphite (~372 mAh g-1) has nearly been reached, leaving little room for further improvement, and the energy density and rate performance of existing LIBs are insufficient for some advanced electronic equipment such as smart watches and micro implantable biosensor systems. With increasing demand and market potential, academic researchers and the industrial community worldwide have focused on investigating anode materials that achieve energy storage systems with desirable power density, high rate performance, and long-term stability, generating further impetus for flexible electrochemical applications such as wearable devices and portable electronics, especially implantable biological equipment. Alternative anode materials such as metals (Si, Ge and Sn) and metal oxides (Co3O4, SnO2 and GeO2) have been considered. Among them, Si and germanium oxide have the highest theoretical gravimetric capacities among elementary-substance and oxide-based anode materials respectively, and have been proposed as the best candidates for rechargeable battery anodes. However, these anode materials face obvious challenges due to their low conductivity and large volume expansion (> 300%) during battery operation. This expansion causes pulverization of the active materials and repeated formation of the solid electrolyte interface (SEI) on them, resulting in the loss of interparticle electrical contact, and consequently deteriorating the battery cycle lifetime and capacity performance. 
In this work, we first demonstrated a facile method to fabricate a flexible alloyed copper/silicon core-shell nanoflower structure anchored on a three-dimensional graphene foam current collector. In electrochemical testing, the resulting copper/silicon core-shell nanoflower electrode demonstrates a high initial capacity of 1869 mAh g-1 at 1.6 A g-1, with a high retention rate of 66.6 % after 500 cycles. More importantly, at a high current density of 10 A g-1, this anode retains > 63 % of its highest capacity (679 mAh g-1), offering enormous potential for energy storage applications. Secondly, we introduced a facile method to synthesize an amorphous GeOx-coated MXene nanosheet structure as the anode in lithium-ion batteries. This GeOx/MXene nanosheet exhibited a reversible capacity of 950 mAh g-1 at 0.5 A g-1 after 100 cycles, indicating that the GeOx/MXene nanosheet structure can significantly improve stability during the lithiation/delithiation processes, with capacity enhanced by improved kinetics. Thirdly, we built simple equipment to measure the high-frequency capacitance change of a silicon composite electrode. At such high frequencies, the equivalent circuit of the coin cell can be treated as a combination of geometrical capacitance and resistance. For alloy anodes, which exhibit huge volume expansion during the lithiation/delithiation processes, the change in geometrical capacitance can be ascribed to stress evolution and the pulverization effect; the variation of stress and pulverization can thereby be tracked through the geometrical capacitance change. To conclude, this project mainly focused on the pulverization and stress effects of anode materials with alloying lithiation behaviour. 
The strategies of the first and second studies used nanostructure engineering and 2D materials to release the stress and prevent pulverization in the electrode, and the resulting electrodes exhibited stable electrochemical performance. Meanwhile, the rate performance of these electrodes was also improved by the addition of highly conductive materials (e.g., copper, graphene, and MXene). To further investigate the consequences of severe volume expansion, we also built a high-frequency capacitance characterization system to perform in-situ measurement of stress evolution and pulverization in coin cells with composite Si anodes, which demonstrated the expected behavior of the electrode in different states of charge.

27.Solar Photovoltaic Power Intermittency Under Passing Clouds: Control, Forecasting, and Emulation

Author:Xiaoyang Chen 2021
Abstract:Solar photovoltaic (PV) energy is becoming an increasingly vital source of energy harvesting in electricity grids. Spurred by regulatory incentives and plummeting costs, the integration of utility-scale PV systems into the power grid is accelerating. Nonetheless, owing to the nature of cloud movement, PV systems exhibit rapid power ramp-rates (RRs) in their output profiles, which poses significant challenges for system operators in maintaining grid transient stability. In this context, this thesis focuses on the management of cloud-induced solar PV intermittency. Three aspects of coping with solar intermittency are addressed, namely control, forecasting, and emulation. Firstly, from the control aspect, two predictive PV power RR control (PRRC) strategies are presented. To regulate system RRs, conventional methods are implemented either by active power curtailment (APC) or energy storage control (ESS). However, current APC methods cannot deal with ramp-down fluctuations, and the integration of an ESS is still costly. On this point, two innovative PRRC strategies are proposed, based on a solar nowcasting system. The first strategy does not require any ESS: with prior knowledge of upcoming RRs, PV generation can be regulated before the actual shading occurs. The second strategy improves the conventional ESS method with minimal energy storage support. The results show that both proposed strategies can effectively comply with RR regulations and outperform conventional methods. Then, in terms of forecasting, an improved sensor network-based spatio-temporal nowcasting method is developed. The proposed method overcomes shortcomings typically associated with existing sensor network-based nowcasting methods, such as predictor mis-selection, inconsistent nowcasting, and poor model adaptability. The experimental results reveal that the proposed nowcasting method is better suited to predicting system RRs. 
Subsequently, the operability of solar nowcasting for PRRC practice is demonstrated. To that end, temporal issues related to operational solar nowcasting are identified, and their effects on nowcasting and PV control performance are evaluated. Lastly, from the emulation aspect, this thesis sets forth a partial shading emulator and a cloud shadow model, which can emulate the module-level responses of utility-scale PV systems under passing clouds. Based on these emulation tools, the characteristics of PV system RRs are comprehensively investigated across various system and cloud shadow attributes. The results indicate that a utility-scale PV system can frequently violate the RR limit imposed by grid operators; hence, advanced RR control strategies are essential for system operators to comply with RR regulations.
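The ramp-rate limit the abstract refers to can be illustrated with a reactive clamp on a power series. This toy sketch is not the thesis's predictive PRRC (which uses nowcasts to act before shading); it simply shows what an RR limit means, and its ramp-down branch makes the abstract's point that curtailment alone cannot fill in a sudden drop without stored or pre-curtailed energy.

```python
# Toy ramp-rate limiter: clamp per-step power changes to +/- rr_limit
# (units are arbitrary, e.g. kW per time step; values are illustrative).

def limit_ramp(power, rr_limit):
    """Return a power series whose step changes never exceed rr_limit."""
    out = [power[0]]
    for p in power[1:]:
        delta = p - out[-1]
        if delta > rr_limit:
            p = out[-1] + rr_limit   # ramp-up: simple curtailment suffices
        elif delta < -rr_limit:
            p = out[-1] - rr_limit   # ramp-down: needs ESS or pre-emptive curtailment
        out.append(p)
    return out

# A passing cloud drops output from 100 to 40 and back:
print(limit_ramp([100, 100, 40, 40, 100], 20))  # [100, 100, 80, 60, 80]
```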

28.Simultaneous Communication and Power Transfer for WBAN/WPAN Applications

Author:Zhenzhen Jiang 2021
Abstract:Wireless body and personal area networks have become commonplace in recent years in industrial, medical, and consumer-based applications, allowing a collection of devices such as medical sensors to be distributed around a person’s body or within their direct vicinity, communicating with each other or a network controller to provide convenient personal services. Distributed devices are typically compact and can even be located within the human body. This produces several bottlenecks relating to RF capability and power availability, which are addressed here. In this thesis, two antennas are developed. The first is designed for implantable and ingestible applications, offering robust wideband performance, covering all the usable licenced operating bands, in the complex material environment of the human body. The radiation characteristics of the proposed antenna outperform other published work at a smaller size, achieved through the novel application of split-ring resonators. The second is an off-body antenna which concurrently provides appropriately polarised bands for indoor and outdoor localisation and data communication. For its minimised size and wide bandwidth, this antenna also outperforms other antennas for WPAN applications published in the literature. Two methods for simultaneous wireless information and power transfer are proposed in this work, based on novel theoretical ideas and hardware implementations. A symbol-splitting system separates the information- and non-information-carrying components of a signal, using them for data reception and energy harvesting, respectively. The second method exploits a characteristic of the requisite rectifier in the RF-to-DC power conversion, recycling the inevitable third harmonic for data reception. The hardware required to achieve both methodologies utilises couplers, and each architecture has been proven feasible through simulation and measurement. 
They provide comparable performance to other published systems, offering a compact, efficient, and convenient route to simultaneous wireless information and power transfer.

29.Exploring the mechanical behaviour of granular materials considering particle shape characteristics: a discrete element investigation

Author:Shivaprashanth Kumar Kodicherla 2021
Abstract:The discrete element method (DEM) is a useful numerical tool for analysing the complex mechanical behaviour of granular materials, as it considers interactions at discrete contact points. In general, most DEM software packages use spherical particles by default because of easy contact detection and lower computational cost. However, researchers have confirmed that particle shape plays a significant role in the mechanical behaviour of granular materials, and with upgraded computational resources it is now possible to simulate this behaviour considering the true geometric shapes of particles. The key objective of the current research is to investigate the mechanical behaviour of granular materials considering particle shape characteristics. For that purpose, two basic geotechnical laboratory tests, i.e., the direct shear test and the triaxial test, are considered in this thesis. The research uses a commercial DEM code named Particle Flow Code (PFC), developed by Itasca. Realistic particle shapes were generated, considering their major planes of orientation, using the built-in clump mechanism in PFC. A series of DEM simulations was performed to investigate the sensitivity of the macroscopic specimen response to specific parameters (e.g., particle number, loading rate). Based on this sensitivity analysis, microscopic parameters were selected to validate the DEM model against experimental direct shear test results. To investigate the effects of particle elongation on the mechanical behaviour of granular materials, a series of direct shear and triaxial test simulations was performed using a range of dimensionless elongation parameters. The behaviour of elongated particles was investigated at the macro- and micro-scale levels. Moreover, relationships between the elongation parameter and critical state parameters were established.  
A series of triaxial test simulations were performed considering two morphological descriptors and their mechanical behaviour was investigated at the macro- and micro-scale levels. In addition, a triaxial test environment was implemented to investigate the mechanical response of granular materials under different loading paths (i.e., axial compression (AC), axial extension (AE), lateral compression (LC) and lateral extension (LE)). The grain-scale interactions in terms of coordination number and deviator fabric were also investigated. Furthermore, the relationships were established among strength, dilatancy and state parameter concerning critical states. 

30.Authenticated Key Exchange Protocols with Unbalanced Computational Requirements

Author:Jie Zhang 2018
Abstract:Security is a significant problem for communications in many Internet of Things (IoT) scenarios, such as military applications, electronic payment, and wireless reprogramming of smart devices. To protect communications, a secret key shared by the communicating parties is often required. Authenticated key exchange (AKE) is one of the most widely used methods to provide two or more parties communicating over an open network with a shared secret key. It has been studied for many years, and a large number of protocols are now available. The majority of existing AKE protocols require the two communicating parties to execute equivalent computational tasks. However, many communications take place between two devices with significantly different computational capabilities, such as a cloud centre and a mobile terminal, or a gateway and a sensor node, and most available AKE protocols do not match these scenarios well. To address the security problem in communications between parties with fairly unbalanced computational capabilities, this thesis studies AKE protocols with unbalanced computational requirements on the communicating parties. We first propose a method to unbalance the computations in the Elliptic Curve Diffie-Hellman (ECDH) key exchange scheme; the resulting scheme is named the UECDH scheme. The method transfers one scalar multiplication from the computationally limited party to its more powerful communicating partner, significantly reducing the computational burden on the limited party, since scalar multiplication is the most time-consuming operation in the ECDH scheme. When applying the UECDH scheme to design AKE protocols, the biggest challenge is how to achieve authentication: without it, two attacks (the man-in-the-middle attack and the impersonation attack) can be launched against the protocols. To achieve authentication, we introduce different measures suitable for a variety of use cases. 
Based on the authentication measures, we propose four suites of UECDH-based AKE protocols. The security of the protocols is discussed in detail. We also implement prototypes of these protocols and of similar protocols in international standards, including IEEE 802.15.6, Transport Layer Security (TLS) 1.3, and Bluetooth 5.0. Experiments are carried out to evaluate the performance. The results show that, on the same experimental platform, the proposed protocols are friendlier to the party with limited computational capability and perform better than similar protocols in these international standards.
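The computational asymmetry that the UECDH scheme targets can be made concrete with a toy ECDH exchange. The sketch below runs textbook ECDH over a deliberately tiny curve (insecure, for illustration only) and counts the scalar multiplications each party performs; in the thesis's UECDH variant one of the limited party's two multiplications would be shifted to the powerful partner, which is not reproduced here. The curve parameters, key values, and counter are my own illustrative assumptions.

```python
# Toy ECDH over y^2 = x^3 + 2x + 3 (mod 97) -- insecure, for cost illustration only.
p, a = 97, 2
G = (3, 6)  # a base point on the toy curve

MULT_COUNT = {"alice": 0, "bob": 0}  # scalar multiplications per party

def point_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P, party):
    """Double-and-add; this is the expensive operation UECDH redistributes."""
    MULT_COUNT[party] += 1
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

# Plain ECDH: each party performs TWO scalar multiplications.
alice_priv, bob_priv = 7, 13
A = scalar_mult(alice_priv, G, "alice")       # Alice's public key
B = scalar_mult(bob_priv, G, "bob")           # Bob's public key
shared_alice = scalar_mult(alice_priv, B, "alice")
shared_bob = scalar_mult(bob_priv, A, "bob")
assert shared_alice == shared_bob
print(MULT_COUNT)  # both parties pay the same cost in plain ECDH
```

The counter makes the baseline symmetry explicit: shifting one of these multiplications to the stronger party is exactly the saving the UECDH construction provides.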

32.EMPIRICAL ESSAYS ON ENTREPRENEURIAL FIRM GROWTH: From the privately entrepreneurial to newly public stage

Author:Jianwen Zheng 2021
Abstract:The thesis contains three papers that focus on both private entrepreneurial firms and firms at the newly public stage. In regard to the privately entrepreneurial stage, Chapter 2 adopts an integrated signalling and screening perspective to investigate how investors perceive various signals sent by different firms across early financing stages. Through multiple case studies of signaller–receiver dyads, Chapter 2 unexpectedly identifies a signal interpretation process model with three steps (extracting the fundamental signal; orchestrating signal compositions; and scrutinising signal consistency) and proposes differences in these three steps between the angel financing stage and the venture capital financing stage. Overall, Chapter 2 provides insights for the entrepreneurial financing literature by identifying a dynamic and temporal effect of different types of signals on high-technology entrepreneurial firms’ equity financing acquisition. Specifically, the findings of Chapter 2 indicate that some signals are persistent while others are temporary across different stages of a venture’s life cycle. Regarding the newly public stage, Chapter 3 examines how outside chief executive officer (CEO) succession affects newly public ventures’ growth. Building upon the evolutionary perspective, the thesis argues that outside CEOs can play a transformational role at the newly public stage because such CEOs are more aware of organisational inertia and more motivated to break it up for firm growth. The findings, based on a sample of Chinese newly public firms between 2009 and 2018, indicate that newly public firms with outside CEO succession have stronger growth, and that this effect is stronger when these outside CEOs possess related experience in managing listed firms. 
The study further finds that, following outside CEO succession, internal promotion of executives and the addition of new senior executive roles can help outside CEOs better play a transformational role, which strengthens firm growth. Based on the same sample, Chapter 4 investigates the compromise decisions arising from interactions between large shareholders over different growth actions. Building on principal–principal agency theory, the thesis suggests that although acquisitive actions can promote growth, the second-largest shareholder tends to discourage such growth action choices because of their potentially high agency risks. Instead, the second-largest shareholder tends to encourage organic growth action choices even though such actions may produce a lower growth rate. The findings show that the second-largest shareholder plays a dual role in monitoring the largest shareholder's decisions.

33.Unraveling the epitranscriptomes with bioinformatics approaches

Author:Kunqi Chen 2021
Abstract:RNA modification has emerged as an important layer of gene regulation, in which biological functions are modulated by reversible post-transcriptional RNA modifications. N6-methyladenosine (m6A) is the most prevalent RNA modification on mRNAs and lncRNAs and plays a pivotal role in various biological processes and disease pathogenesis. In this thesis, I present four bioinformatics approaches/applications to unravel the m6A epitranscriptome. We collected a total of 442,162 reliable m6A sites identified by seven base-resolution technologies, together with quantified (rather than binary) epitranscriptome profiles estimated from 1,363 high-throughput sequencing samples, to build the ‘m6A-Atlas’ database. As experimental approaches for studying the epitranscriptome are technically challenging and expensive, we used the collected base-resolution data to train a high-accuracy predictor, ‘WHISTLE’, for m6A site identification from RNA sequences. Moreover, this prediction method was further extended to infer RNA modification-associated genetic variants, to uncover potential epitranscriptome pathogenesis involving eight different types of RNA modification. In the last chapter, we present a convenient measurement weighting strategy for enhanced detection of RNA co-methylation modules by tolerating the artifacts generated by epitranscriptome sequencing technology.
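As a toy illustration of sequence-based m6A site prediction (not the actual WHISTLE model, which uses a richer feature set and trained classifiers), the sketch below encodes a 21-nt window around a candidate adenosine as 3-mer counts and separates synthetic motif-bearing "sites" from random background with a nearest-centroid rule. The GGACU motif, window length, and classifier are illustrative assumptions.

```python
import random

random.seed(0)
BASES = "ACGU"
MOTIF = "GGACU"  # DRACH-like context, used here only to generate toy positives

def make_seq(positive, length=21):
    """Random 21-nt window; positives carry the motif at a random offset."""
    s = [random.choice(BASES) for _ in range(length)]
    if positive:
        i = random.randrange(length - len(MOTIF) + 1)
        s[i:i + len(MOTIF)] = MOTIF
    return "".join(s)

KMERS = [a + b + c for a in BASES for b in BASES for c in BASES]

def kmer_counts(seq, k=3):
    """Encode a sequence as a 64-dimensional vector of 3-mer counts."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return [counts[m] for m in KMERS]

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def dist2(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v))

# Nearest-centroid "classifier" on synthetic training and held-out data.
train = [(kmer_counts(make_seq(y)), y) for y in [1, 0] * 100]
test = [(kmer_counts(make_seq(y)), y) for y in [1, 0] * 50]
pos_c = centroid([x for x, y in train if y == 1])
neg_c = centroid([x for x, y in train if y == 0])
preds = [1 if dist2(x, pos_c) < dist2(x, neg_c) else 0 for x, _ in test]
accuracy = sum(p == y for p, (_, y) in zip(preds, test)) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The motif 3-mers (GGA, GAC, ACU) dominate the centroid difference, which is the same intuition behind feeding sequence-derived features to a real predictor.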

34.Development of NanoBRET-based assays to determine ligand binding affinities and ligand-induced selective signalling of the human GnRH receptor

Author:Li Shen 2021
Abstract:Gonadotropin-releasing hormone (GnRH) is a pivotal regulator of the human reproductive system. Kisspeptin (KP) neurons act as the gatekeepers of GnRH neurons. GnRH and KP are both peptide hormones that act on their cognate receptors, the human GnRH receptor (hGnRHR) and the KISS1 receptor (hKISS1R). Both belong to the superfamily of guanine nucleotide-binding protein (G protein)-coupled receptors (GPCRs), which activate G-protein-dependent downstream signalling pathways, eventually leading to multiple cellular outcomes. In addition to their important roles in reproduction, they also inhibit proliferation and/or metastasis of cancer cells. Therefore, both receptors are important drug targets. NanoBRET-based ligand binding assays were developed to investigate ligand-receptor interactions. Fluorescently labelled peptide analogues, Fluorescent-GnRH and BODIPY630/650-GnRH, were first conceptualized, purchased, and evaluated. Fluorescent-GnRH retained receptor binding affinity, agonist activity, and specificity similar to those of the endogenous ligand, GnRH I. It was therefore used as an energy acceptor in NanoBRET-based ligand binding assays, and also in imaging studies of GnRHR. In addition, hGnRHR was tagged with Nluc at its N-terminus (N-Nluc-hGnRHR) to act as the energy donor in NanoBRET-based ligand binding assays. A secretory signal peptide (S) from interleukin-6 (IL-6) was added (S-N-Nluc-hGnRHR) to enhance its membrane expression. Similarly, a NanoBRET-based ligand binding assay with Fluorescent-KP-18 and S-N-Nluc-hKISS1R was also established. Overall, a NanoBRET-based ligand binding assay format for high-throughput drug screening of hGnRHR has been established and has great potential for drug development. Other NanoBRET-based assays were conducted to examine the differential G protein coupling profiles of hGnRHR and hKISS1R when stimulated with different ligands. 
The NanoBRET-based assay using hGnRHR-rTRHR tail-C-Nluc indicates that hGnRHR couples to Gi1, Gq, and G12, but not Gs, when stimulated with GnRH I and GnRH II. Overall, GnRH II seems to exhibit similar potency to GnRH I in activating Gq but seems to be less potent in activating Gi1 and G12; the reduction in activity is probably caused mainly by the Tyr8 substitution in GnRH II. Similarly, after stimulation with KP-10 and KP-14, activated hKISS1R-C-Nluc showed predominant coupling to Gq in the same assay. Additionally, activated hKISS1R-C-Nluc displayed weak coupling to Gi1; however, whether this coupling is functional needs to be further investigated. To sum up, NanoBRET-based assays have been applied to determine ligand-induced selective signalling of hGnRHR and hKISS1R, and these findings could be important for the development of selective drugs that activate only the desired receptor-mediated signalling pathways while bypassing the others.

35.An experimental study on turbulent flow in asymmetric compound channels

Author:Prateek Kumar Singh 2022
Abstract:This thesis presents research into the turbulent flow characteristics of open channels with complex cross-sections, with explicit attention to the interfacial region between the main channel and floodplain(s). The research comprises two components: a series of detailed experimental investigations, and the development of mathematical models of the coefficient of apparent shear stress for estimating zonal and overall discharge in complex asymmetric compound open channels. The primary goal of the experimental investigation was the procurement of high-quality data covering a well-defined and controlled range of hydrodynamic parameters. In support of this objective, thirty-three sets of experiments were undertaken and analysed in asymmetric compound channels to investigate flow behaviour over the floodplain-main channel interface. Laboratory experiments were performed under uniform flow conditions for new configurations with different floodplain widths and multi-stage cases, to fill the gap in the datasets on asymmetric compound open channels. Measurements of depth-averaged velocity, Reynolds shear stress, secondary currents, and apparent shear stress were taken with down-probe and side-probe acoustic Doppler velocimeters to examine the transverse interaction between the two stages in these new configurations. The objective of the theoretical investigation was to obtain the generalized behaviour of the interaction mechanism for the new configurations. Flow interaction between the two sub-sections affects the overall discharge capacity and conveyance distribution in compound open channels. Many investigators have attempted to estimate this flow interaction in terms of the apparent shear stress acting on the imaginary plane between the floodplain and the main channel. 
However, previous models are neither generalized for asymmetric channels nor applied to a wide range of data sets, including field data, even though the apparent shear stress for asymmetric channels is higher than for symmetric channels at the same flow depth and geometrical congruency. The momentum exchange models used in this thesis were motivated by scaling arguments and allow a simple analytical solution for the zonal discharge in each section. However, the apparent shear models were found to perform differently at different depth ratios, and none of the previous models performed well in channels with a low depth ratio. The different models for apparent shear based on width ratio and slope gave mixed results, which are discussed in detail. New models for the coefficient of apparent shear stress are therefore proposed to improve zonal and overall discharge estimation for these new configurations. The models revealed that the coefficient depends strongly on the depth ratio for different ratios of bankfull height to floodplain width in these new configurations. The proposed new models can be applied to laboratory and field data without calibration.

36.Control and Optimization of the Dual-Active-Bridge Converter for Future Smart Grid Application

Author:Haochen Shi 2020
Abstract:The modern smart grid requires flexible control, high transmission efficiency, and good robustness against contingencies. In addition, a growing share of generation and load is direct current (DC), such as photovoltaic power stations, battery energy storage stations, and most consumer electronics, such as computers. DC power transmission systems, such as DC solid-state transformers (SSTs), can therefore be utilized to reduce the volume and losses of the transmission system. Among the various DC SST structures, the DC SST based on the dual active bridge (DAB) converter is considered a promising topology due to its symmetrical structure, bidirectional power flow capacity, wide soft-switching region, and flexible control ability. As a key component of DC SSTs, the operation of the DAB converter determines the overall performance of the whole system; improving the DAB is therefore essential to DC SSTs and modern smart grid applications. In this thesis, the steady-state and dynamic operation of the DAB converter, as well as its soft-switching behaviour, have been studied. To improve the steady-state performance of the DAB converter, multiple optimizations are proposed to reduce the backflow current (reactive power) and extend the soft-switching region, thereby improving transmission efficiency. In addition, a frequency-domain model is introduced to further reduce the complexity of the optimization model. The effectiveness of these optimization schemes has been verified by experimental results. Compared with traditional phase-shift control, the proposed optimization methods significantly increase transmission efficiency. Furthermore, a multiple natural switching surfaces boundary control is proposed to enhance the dynamic performance of the DAB converter, especially under start-up and voltage variation conditions. It achieves a fast dynamic response and eliminates DC bias current. 
Both simulation and experimental results are presented to prove the superiority of the proposed method. Compared with traditional closed-loop control based on a PI controller, the proposed boundary control dramatically accelerates the dynamic response. Moreover, the resonant transition for different switching conditions during the dead-time period has been investigated. A phase correction method and a variable dead-time method are then proposed to compensate for the phase difference between the gate signal and the actual waveform, and for the power losses during dead-time. The effectiveness of these methods is validated by comparing them with a fixed dead-time method in experiments. The results suggest that the proposed dead-time compensation and variable dead-time methods correct the phase delay and improve transmission efficiency. 
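The phase-shift control that the proposed optimizations improve upon can be sketched with the textbook single-phase-shift (SPS) power-transfer expression for a DAB converter, P = nV1V2·φ(π − |φ|)/(2π²fL). The component values below are illustrative assumptions, not the thesis's hardware; the sketch only shows how transferred power varies with the phase shift φ and peaks at φ = π/2.

```python
import math

def dab_power(phi, V1=400.0, V2=400.0, n=1.0, fs=20e3, L=60e-6):
    """Textbook SPS power transfer of a DAB converter (illustrative parameters)."""
    return n * V1 * V2 * phi * (math.pi - abs(phi)) / (2 * math.pi ** 2 * fs * L)

# Sweep the phase shift from 0 to pi and locate the maximum-power point.
phis = [i * math.pi / 200 for i in range(201)]
powers = [dab_power(p) for p in phis]
best_phi = phis[powers.index(max(powers))]
print(f"max power {max(powers)/1e3:.1f} kW at phi = {best_phi:.4f} rad")
```

Operating far from the efficient region of this curve is what produces the backflow current and reactive power that the thesis's optimization schemes minimize.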

37.Variational Inequalities and Optimization Problems

Author:Yina LIU 2015
Abstract:The primary objective of this research is to investigate various optimization problems connected with partial differential equations (PDEs). In Chapter 2, we utilize the tool of tangent cones from convex analysis to prove the existence and uniqueness of a minimization problem. Since the admissible set considered in Chapter 2 is a suitable convex set in $L^\infty(D)$, we can make use of tangent cones to derive the optimality condition for the problem. However, if we let the admissible set be a rearrangement class generated by a general function (not a characteristic function), the method of tangent cones may no longer apply. The central part of this research is Chapter 3, which builds on foundational work developed mainly by Geoffrey R. Burton and his collaborators in the 1990s; see [7, 8, 9, 10]. Usually, we consider a rearrangement class (a set comprising all rearrangements of a prescribed function) and then optimize some energy functional, related to partial differential equations, over this class or part of it; we call this a rearrangement optimization problem (ROP). In recent years this area of research has become increasingly popular amongst mathematicians, for several reasons. One reason is that many physical phenomena can be naturally formulated as ROPs. Another is that ROPs have natural links with other branches of mathematics, such as geometry, free boundary problems, convex analysis, and differential equations. Lastly, such optimization problems offer very challenging questions that are fascinating for researchers; see for example [2]. More specifically, Chapter 2 and Chapter 3 are based on four papers [24, 40, 41, 42], mainly in collaboration with Behrouz Emamizadeh. Chapter 4 is inspired by [5], in which the existence and uniqueness of solutions of various PDEs involving Radon measures are presented. 
In order to establish a connection between rearrangements and PDEs involving Radon measures, the author investigates, in Chapter 4, a way to extend the notion of rearrangement from functions to Radon measures.
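For concreteness, the standard formulation underlying the rearrangement optimization problems of Chapter 3 can be stated as follows; the notation is the usual one from the rearrangement literature, not necessarily the thesis's exact symbols.

```latex
% Rearrangement class generated by a prescribed function g_0 on a bounded domain D:
\mathcal{R}(g_0) = \bigl\{\, f :
    |\{x \in D : f(x) \ge \alpha\}| = |\{x \in D : g_0(x) \ge \alpha\}|
    \ \text{for all } \alpha \in \mathbb{R} \,\bigr\}.
% A rearrangement optimization problem (ROP) then seeks
\inf_{f \in \mathcal{R}(g_0)} \Phi(f)
    \qquad \text{or} \qquad
\sup_{f \in \mathcal{R}(g_0)} \Phi(f),
% where \Phi is an energy functional associated with the underlying PDE,
% e.g. \Phi(f) = \int_D f \, u_f \, dx with u_f the solution corresponding to data f.
```

Because $\mathcal{R}(g_0)$ is generally not convex, convex-analytic tools such as the tangent cones of Chapter 2 no longer apply directly, which is what motivates the separate machinery of Chapter 3.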

38.Evolutional and Swarm Algorithms Optimized Density-Based Clustering for Data Analytics

Author:Chun Guan 2018
Abstract:Clustering is one of the most widely used pattern recognition technologies for data analytics. Density-based clustering is a category of clustering methods which can find arbitrarily shaped clusters. A well-known density-based clustering algorithm is Density-Based Spatial Clustering of Applications with Noise (DBSCAN). DBSCAN has three drawbacks: firstly, its parameters are hard to set; secondly, the number of clusters cannot be controlled by the user; and thirdly, DBSCAN cannot directly be used as a classifier. To address these drawbacks, a novel framework, Evolutionary and Swarm Algorithm optimised Density-based Clustering and Classification (ESA-DCC), is proposed. Evolutionary and Swarm Algorithms (ESAs) have been applied to optimisation problems in a variety of research fields, including data analytics. Numerous categories of ESAs have been proposed, such as Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), Differential Evolution (DE) and Artificial Bee Colony (ABC). In this thesis, ESAs are used to search for the best parameters of density-based clustering and classification in the ESA-DCC framework, addressing the first drawback of DBSCAN. To address the second drawback, four types of fitness functions are defined that enable users to set the number of clusters as an input. A supervised fitness function is defined so that ESA-DCC can be used as a classifier, addressing the third drawback. Four ESA-DCC methods, GA-DCC, PSO-DCC, DE-DCC and ABC-DCC, are developed. The performance of the ESA-DCC methods is compared with K-means and DBSCAN using ten datasets. The experimental results indicate that the proposed ESA-DCC methods can find the optimised parameters in both supervised and unsupervised contexts. The proposed methods are applied to a product recommender system and image segmentation cases.
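The core idea of ESA-DCC (searching DBSCAN's eps and minPts with an evolutionary loop under a fitness that targets a user-chosen number of clusters) can be sketched as below. The fitness function, mutation scheme, and toy two-blob dataset are my own illustrative assumptions; they stand in for the thesis's four fitness functions and full GA/PSO/DE/ABC variants.

```python
import math
import random

random.seed(3)

# Toy data: two well-separated 2-D blobs.
points = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(30)] + \
         [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(30)]

def dbscan(pts, eps, min_pts):
    """Minimal DBSCAN; label -1 marks noise."""
    labels = [None] * len(pts)
    def neighbours(i):
        return [j for j in range(len(pts)) if math.dist(pts[i], pts[j]) <= eps]
    cid = 0
    for i in range(len(pts)):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1
            continue
        labels[i] = cid
        seeds = [j for j in nb if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid          # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb2 = neighbours(j)
            if len(nb2) >= min_pts:       # core point: keep expanding
                seeds.extend(nb2)
        cid += 1
    return labels

def fitness(params, target_clusters=2):
    """Reward hitting the user-requested cluster count, penalise noise."""
    eps, min_pts = params
    labels = dbscan(points, eps, min_pts)
    k = len({c for c in labels if c >= 0})
    noise = labels.count(-1) / len(labels)
    return -abs(k - target_clusters) - noise

def mutate(params):
    eps, min_pts = params
    return (min(2.0, max(0.05, eps + random.gauss(0, 0.2))),
            min(8, max(2, min_pts + random.choice([-1, 0, 1]))))

# Simple (1+4) evolutionary search over (eps, minPts).
best = max([(random.uniform(0.3, 1.5), random.randint(2, 6)) for _ in range(6)],
           key=fitness)
for _ in range(30):
    best = max([best] + [mutate(best) for _ in range(4)], key=fitness)

labels = dbscan(points, *best)
n_clusters = len({c for c in labels if c >= 0})
print(f"found eps={best[0]:.2f}, minPts={best[1]} -> {n_clusters} clusters")
```

A real GA or PSO replaces the single-parent mutation loop with a population and crossover/velocity updates, but the fitness-driven parameter search is the same mechanism.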

39.Measuring Tail Operational Risk under Extreme Losses

Author:Yishan Gong 2022
Abstract:As a lesson from the severe losses of £827 million by the UK merchant bank Barings in 1995, qualitative and quantitative modelling of operational risk has attracted growing research attention in the banking and insurance industries. Such operational risk can lead to serious consequences, even bankruptcy. Hence, it is necessary for financial institutions to model and guard against operational risk. With this in mind, this thesis investigates important topics in the quantitative estimation of operational risk. We use heavy-tailed distribution functions to model the loss severities, and use several tools, such as copulas and multivariate regular variation, to model the dependence structures. Firstly, we consider both univariate and multivariate operational risk models, in which the loss severities are modelled by a series of weakly tail-dependent and heavy-tailed positive random variables, and the loss frequency processes are general counting processes. In such models, we study the limit behaviour of the Value-at-Risk (VaR) and Conditional Tail Expectation (CTE) of aggregate operational risks. The methodology is based on capital approximation within the framework of the Basel II/III regulatory capital accords, the so-called Loss Distribution Approach. We also conduct simulation studies to check the accuracy of the obtained approximations and their (in)sensitivity to different dependence structures and to the heavy-tailedness of the severities. Next, in order to include both the weakly and the strongly tail-dependent case, we first consider a bivariate operational risk cell model, in which the loss severities are modelled by heavy-tailed and weakly (or strongly) dependent nonnegative random variables, and the frequency processes are described by two arbitrarily dependent general counting processes. In such a model, we then establish asymptotic formulas for the VaR and CTE of the total aggregate loss. 
Simulation studies are also conducted to check the accuracy of the obtained theoretical results via the Monte Carlo method. Later, we extend the study of Gong and Yang (2021) to derive asymptotic approximations for VaR and CTE in a multivariate operational risk cell model, in which the loss severities are modelled by heavy-tailed and weakly (or strongly) dependent nonnegative random variables, and the frequency processes are described by several arbitrarily dependent general counting processes. In this model, we establish asymptotic formulas for the VaR and CTE of the total aggregate loss. Numerical studies are conducted to examine the performance and to test the sensitivity of these asymptotic formulas. Lastly, to study randomly weighted sums with infinitely many dependent terms, we consider the randomly weighted sums generated by a series of dependent subexponential primary random variables and arbitrarily dependent random weights. We establish a Kesten-type upper bound for their tail probabilities in the presence of subexponential primary random variables under a certain dependence among them. As applications, we then derive asymptotic formulas for the tail probability and the VaR of the total aggregate loss in a multivariate operational risk cell model.
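The kind of Monte Carlo check described above can be sketched with a single-cell toy model: Poisson loss counts, Pareto (heavy-tailed) severities, and empirical VaR and CTE of the aggregate loss at the 99% level. The distribution choices and parameter values are illustrative assumptions, not those used in the thesis.

```python
import math
import random

random.seed(42)

def pareto_severity(alpha=2.5, x_min=1.0):
    """Pareto loss severity via inverse-transform sampling (heavy-tailed)."""
    return x_min / random.random() ** (1.0 / alpha)

def poisson_count(lam=5.0):
    """Poisson loss frequency (Knuth's method, adequate for small lambda)."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > limit:
        k += 1
        prod *= random.random()
    return k - 1

# Simulate the aggregate loss S = X_1 + ... + X_N over many scenarios.
trials = 20000
losses = sorted(sum(pareto_severity() for _ in range(poisson_count()))
                for _ in range(trials))

q = 0.99
var = losses[int(q * trials)]          # empirical Value-at-Risk at level q
tail = losses[int(q * trials):]
cte = sum(tail) / len(tail)            # empirical Conditional Tail Expectation
print(f"VaR_99 = {var:.2f}, CTE_99 = {cte:.2f}")
```

By construction CTE exceeds VaR at the same level, since it averages the losses beyond the quantile; the thesis's asymptotic formulas approximate exactly these two tail quantities.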

40.Impact of Mixed Layer Vegetation on Open Channel Flows

Author:Hamidreza Rahimi 2020
Abstract:Vegetation plays a fundamental role in changing the flow characteristics of natural channels such as rivers. Some studies have evaluated the impact of vegetation on open channel flows, but their modelling is often far from natural conditions and usually covers only a single type of vegetation. Vegetation in natural channels is usually denser in the lower layer and sparser in the upper layer. For example, in riparian environments or floodplains, shorter vegetation (grasses or shrubs) is submerged, while taller vegetation (e.g. trees) remains emergent. However, the impact of such mixed-layer vegetation on the flow structure is not well understood, which is significant for flood-risk reduction and the water environment. In this thesis, a series of laboratory experiments was undertaken to study the impact of double-layer vegetation in both emergent and submerged conditions. The vegetation was simulated by an array of PVC dowels with two different heights, 10 cm and 20 cm. The experiments were carried out in rectangular hydraulic flumes at the Nanjing Hydraulic Research Institute and Xi’an Jiaotong-Liverpool University. Dowels were arranged in 5 different formations, each with 4 different flow depths, to capture the inflection of velocity over the mixing region between short and tall dowels. Velocity measurements were taken using a 3-D Acoustic Doppler Velocimeter (ADV) and a propeller velocimeter in order to obtain key parameters such as turbulence intensity, Reynolds stress, and turbulence kinetic energy. Ansys Fluent was used to simulate the same sets of vegetation configurations using the k-ε model, with a mesh sensitivity analysis, to capture the inflection over the short vegetation region. The numerical study of the double-layer vegetation showed that the modelling results are in good agreement with the experimental data for the different vegetation configurations. 
New analytical models based on Reynolds-averaged closure principles have also been proposed to describe the vertical distribution of mean streamwise velocity in an open channel flow with double-layered vegetation. The proposed models were evaluated against extensive experimental data from our experiments and from other published experiments in the literature. The Root Mean Square Error (RMSE) of the velocity comparisons is found to be less than 0.0342 m/s, which is acceptable. In another series of experiments, vegetation dowels were placed on only one side of the channel, with the other side left empty, to simulate partial vegetation; a strong shear layer was observed between the non-vegetated and vegetated zones, indicating the reduction effect of vegetation on the flow velocity. Furthermore, modifications are recommended to properly calculate the hydraulic radius and Manning's coefficient for flow with double-layer vegetation. Finally, it is concluded that flow through double-layer vegetation is more complicated than flow through single-layer vegetation, and calibration of the proposed models with three-layer vegetation is therefore recommended.
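The role that the hydraulic radius and Manning's coefficient play in the recommendations above can be illustrated with the standard Manning equation applied separately to a smooth main channel and a vegetated floodplain (the divided-channel idea). The roughness values, geometry, and slope below are illustrative assumptions, not the thesis's calibrated modifications.

```python
def manning_discharge(area, wetted_perimeter, slope, n):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A/P."""
    R = area / wetted_perimeter
    return area * R ** (2.0 / 3.0) * slope ** 0.5 / n

S = 0.001  # bed slope (assumed)
# Smooth main channel vs. densely vegetated floodplain (assumed n values).
q_main = manning_discharge(area=2.0, wetted_perimeter=3.0, slope=S, n=0.012)
q_flood = manning_discharge(area=1.0, wetted_perimeter=2.5, slope=S, n=0.08)
print(f"main channel: {q_main:.3f} m^3/s, vegetated floodplain: {q_flood:.3f} m^3/s")
```

The much larger n of the vegetated zone collapses its conveyance, which is why mis-estimating n or R for a double-layer canopy leads directly to errors in the predicted discharge.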

41.Growth, Dielectrics Properties, and Reliability of High-k Thin Films Grown on Si and Ge Substrates

Author:Qifeng Lu 2018
Abstract:With the continuous downscaling of Metal Oxide Semiconductor Field Effect Transistors (MOSFETs), silicon (Si) based MOS devices have reached their limits. To further decrease the minimum feature size of devices, high-k materials (with dielectric constants larger than that of silicon dioxide (SiO2), 3.9) have been employed to replace the SiO2 gate dielectric. However, high-k dielectrics have higher trap densities than the nearly trap-free SiO2. Therefore, it is important to comprehensively investigate the defects and the electron trapping/de-trapping properties of these oxides. Also, germanium (Ge) has emerged as a promising channel material for high-speed metal-oxide-semiconductor (MOS) devices, mainly due to its high carrier mobility compared with that of silicon. However, due to the poor interface quality between the Ge substrate and gate dielectrics, it is difficult to fabricate high-performance germanium-based devices. Therefore, an effective passivation method for the germanium substrate is a critical issue to be addressed to allow the fabrication of high-quality Ge MOSFETs. To solve the above problems, this research studied high-k materials and the passivation of germanium substrates. In the first part of this work, lanthanide zirconium oxides (LaZrOx) were deposited on Si substrates using atomic layer deposition (ALD). The pulse capacitance-voltage (CV) technique, which allows a CV sweep to be completed in several hundred microseconds, was employed to investigate oxide traps in the LaZrOx. The results indicate that: (1) more traps are observed in the LaZrOx than with the conventional CV characterization method; (2) the time-dependent trapping/de-trapping is influenced by the edge times, pulse widths, and peak-to-peak voltages (VPP) of the applied gate voltage pulses. 
Also, an anomalous behavior in the pulse CV curves, in which the relative positions of the forward and reverse CV traces are opposite to those obtained from conventional measurements, was observed. A model based on interface dipoles formed at the high-k/SiOx interface is proposed to explain this behavior; the dipoles arise from the difference in oxygen atom density between the high-k materials and the native oxides. In addition, a hump appears in the forward pulse CV traces. This is explained by the displacement current due to the pn junction formed between the substrate and the inversion layer during the pulse CV measurement. Secondly, hafnium titanate oxides (TixHf1-xO2) with different concentrations of titanium oxide were deposited on p-type germanium substrates by ALD. X-ray Photoelectron Spectroscopy (XPS) was used to analyze the interface quality and chemical structure. The current-voltage (IV) and capacitance-voltage (CV) characteristics were measured using an Agilent B1500A semiconductor analyzer. The results indicate that GeOx and germanate are formed at the high-k/Ge interface and that the interface quality deteriorates severely. Also, the leakage current increases as the HfO2 content in the TixHf1-xO2 is increased. The relatively large leakage current density (~10-3 A/cm2) is partially attributed to the deterioration of the interface between Ge and TixHf1-xO2 caused by the oxidation source from HfO2; the small band gap of the TiO2 also contributes to the observed leakage current. The CV characteristics show almost no hysteresis between the forward and reverse traces, which indicates a low trap density in the oxide. Since deterioration of the interface quality was observed, an in-situ ZnO interfacial layer was deposited in the ALD system to passivate the germanium substrate. However, a larger distortion of the as-deposited sample was observed. 
Although post-deposition annealing (PDA) has a positive effect on the CV curves, the frequency dispersion and the leakage current increase after PDA. Therefore, the ZnO interfacial layer is not an effective passivation layer for the germanium substrate. In addition, GeO is formed by the reaction, and GeO desorption from the gate oxide/Ge interface occurs, which also degrades the device performance. In the final part of this work, to circumvent the problems explored above, a 0.1 mol/L propanethiol solution in 2-propanol, a 0.1 mol/L octanethiol solution in 2-propanol, and a 20% (NH4)2S solution in DI water were used to passivate n-type germanium substrates before HfO2 dielectric thin films were deposited by ALD. The results show an increase in the dielectric constant and a reduction in leakage current for the samples with chemical treatments. The sample passivated by the octanethiol solution has the largest dielectric constant, while the lowest leakage current density is observed for the sample passivated by the (NH4)2S solution, followed by the one passivated by the octanethiol solution. In addition, the effects of a TiN cap layer on the formation and suppression of GeO were investigated. It was found that the formation of GeO and the desorption of GeO from the gate oxide/Ge interface are suppressed by the cap layer. As a result, an increase in dielectric constant from 8.2 to 13.5 and a lower leakage current density under negative applied voltage are obtained. Therefore, passivation of the substrates by octanethiol or (NH4)2S solutions, followed by a TiN cap layer, is a useful technique for Ge-based devices.
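Dielectric constants like the 8.2 to 13.5 increase quoted above are typically extracted from the accumulation capacitance of a MOS capacitor via the parallel-plate relation C = k·ε0·A/d. The sketch below shows the round-trip; the capacitor area and oxide thickness are illustrative assumptions, not the measured devices of the thesis.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def accumulation_capacitance(k, area, thickness):
    """Parallel-plate model: C = k * eps0 * A / d."""
    return k * EPS0 * area / thickness

def dielectric_constant(c_acc, area, thickness):
    """Invert the parallel-plate model: k = C * d / (eps0 * A)."""
    return c_acc * thickness / (EPS0 * area)

area = 7.85e-9   # ~100-um-diameter MOS capacitor, in m^2 (assumed)
d = 10e-9        # oxide thickness, 10 nm (assumed)
c = accumulation_capacitance(13.5, area, d)
k_extracted = dielectric_constant(c, area, d)
print(f"C = {c * 1e12:.1f} pF -> extracted k = {k_extracted:.1f}")
```

In practice the series contribution of any interfacial GeOx layer lowers the measured capacitance, which is one reason suppressing GeO formation raises the apparent k.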

43.Dynamics of Learners' Emergent Motivational Disposition: The Case of EAP Learners at a Transnational English-Medium University

Author:Austin Cody Pack 2021
Abstract:This thesis aims to better understand the processes affecting the motivational dynamics of English for Academic Purposes (EAP) learners at a transnational education (TNE) university that uses English as its medium of instruction (EMI). It joins the ongoing discussion of how to leverage Complex Dynamic Systems Theory (CDST) to understand second language (L2) motivation and takes a special interest in understanding what demotivates students to study EAP. It employed a mixed methodology and a two-stage research design to explore how EAP learners’ motivation changed over the course of a semester in their first year, as well as what the salient demotivating and motivating factors were for these students. First, motivation journals, motivation questionnaires, semi-structured interviews, and focus group discussions were leveraged to investigate how and why the motivation levels of 60 first-year EAP students changed over a period of 10 weeks. Salient demotivating factors identified from the data were then further explored by means of a demotivation questionnaire administered to the larger student population (n=1517), in order to understand how frequently these factors were a source of demotivation. Learners’ motivational disposition was found to be complex and multifaceted, changing frequently between motivated and demotivated states. Motivation constructs (e.g. L2 self-guides, instrumentality) frequently used in previous L2 motivation studies did not sufficiently account for the changes in students’ motivational disposition from day to day. Instead, it was found that motivational disposition, or students’ willingness to expend effort to learn at any given moment, emerges from the complex and non-linear interaction of a multitude of factors internal and external to the language learner and language classroom. These factors exerted influences of different strengths on motivational disposition according to changes in time and context. 
Sources of demotivation were frequently associated with factors outside of the EAP classroom, and sources of motivation were frequently associated with factors inside the EAP classroom. The study is significant for both theory and research methodology relating to L2 motivation. First, while CDST has been used as a metaphor for understanding the dynamics of motivation, the current study provides evidence that characteristics of CDSs can be grounded in actual data (e.g. the emergent nature of motivation, sensitivity to initial conditions, etc.). Second, based on these findings this thesis presents a new CDST-informed model of language learning motivation. Third, it suggests that it is necessary to move away from a binary way of thinking about motivational factors that categorizes them into a dichotomy of motivating/demotivating factors; a more complex and fluid understanding of motivational factors is needed. Lastly, it highlights the need for frequent sampling that ensures minimal time has passed between when students recollect motivating/demotivating experiences and the actual time those experiences occurred.

44.Multiview Video View Synthesis and Quality Enhancement using Convolutional Neural Networks

Author:Samer Jammal 2020
Abstract:Multiview videos, which are recorded from different viewpoints by multiple synchronized cameras, provide an immersive experience of 3D scene perception and a more realistic 3D viewing experience. However, this imposes an enormous load on the acquisition, storage, compression, and transmission of multiview video data. Consequently, new and advanced 3D video technologies for efficient representation and transmission of multiview data are important for the success of multiview applications. Various methods aiming at improving multiview video coding efficiency are developed in this thesis, with convolutional neural networks used as the core engine. The thesis includes two novel methods for accurate disparity estimation from stereo images. It proposes the use of convolutional neural networks with multi-scale correlation for disparity estimation. This method exploits the dependency between two feature maps by combining the benefits of a small correlation scale for fine details and a large scale for larger areas. Nevertheless, rendering accurate disparity maps for foreground and background objects with fine details in real scenarios is a challenging task. Thus, a framework with a three-stage strategy for the generation of high-quality disparity maps for both near and far objects is proposed. Furthermore, current techniques for multiview data representation, even if they exploit inter-view correlation, require large storage or transmission bandwidth, which grows almost linearly with the number of transmitted views. To address this problem, we propose a novel view synthesis method for multiview video systems. In this approach the intermediate views are represented solely by their edges, while their texture content is dropped. The texture content can then be synthesized using a convolutional neural network, by matching and exploiting the edges and other information in the central view.
Experimental results verify the effectiveness of the proposed framework. Finally, highly compressed multiview videos suffer severe quality degradation. Thus, it is necessary to enhance the visual quality of highly compressed views at the decoder side. Consequently, a novel method for multiview quality enhancement is proposed that directly learns an end-to-end mapping between the low-quality and high-quality views and recovers the details of the low-quality view.
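The multi-scale correlation idea can be illustrated with a toy sketch; everything below (feature values, window radii, disparity range) is a hypothetical illustration, not the actual networks described in the thesis:

```python
# Toy sketch of correlation-based disparity estimation over 1-D feature rows.
# For a pixel in the left view, a window of features is correlated against
# shifted windows in the right view; the shift with the highest correlation
# is the estimated disparity. A small window favors fine detail, a large one
# stabilizes matching over bigger areas.

def correlate(left, right, x, shift, radius):
    """Sum of feature products over a window of the given radius."""
    score = 0.0
    for dx in range(-radius, radius + 1):
        lx, rx = x + dx, x + dx - shift
        if 0 <= lx < len(left) and 0 <= rx < len(right):
            score += left[lx] * right[rx]
    return score

def estimate_disparity(left, right, x, max_disp, radius):
    """Pick the disparity candidate with the highest window correlation."""
    return max(range(max_disp + 1),
               key=lambda d: correlate(left, right, x, d, radius))

left = [0.0, 0.1, 0.9, 0.2, 0.0, 0.0, 0.7, 0.3, 0.1, 0.0]
right = left[3:] + [0.0] * 3          # right view: left features shifted by disparity 3

d_fine = estimate_disparity(left, right, x=6, max_disp=5, radius=1)    # small scale
d_coarse = estimate_disparity(left, right, x=6, max_disp=5, radius=3)  # large scale
print(d_fine, d_coarse)  # → 3 3
```

The proposed network computes such correlations between learned deep feature maps at several scales jointly, rather than relying on a single hand-tuned window.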

45.Dual-functional carbon-based Interlayers towards high-performance Li-S batteries

Author:Ruowei Yi 2021
Abstract:To reduce carbon emissions and alleviate pollution, combustion engines burning fossil fuels are gradually being replaced with new energy devices. Secondary batteries with high energy storage have become a popular alternative power source because they produce zero emissions during operation. In recent years, the lithium-ion battery, the most popular energy storage device in the battery market for mobile devices, has been gradually declining in the power battery field: its energy density (~150 Wh kg-1) can no longer meet the demands of power equipment, and current research has almost reached the theoretical capacity of lithium-ion battery electrode materials, leaving little room for improvement. Therefore, academic research has begun to seek a variety of new battery systems to meet the needs of the industry. As a battery system based on the non-topological reaction between a lithium anode and a sulfur cathode, the lithium-sulfur battery has a very high theoretical energy density (2567 Wh kg-1) and theoretical specific capacity (1672 mAh g-1), which is sufficient to meet the energy density requirements (500-600 Wh kg-1) of power batteries. Meanwhile, sulfur is low-cost and environmentally friendly, which makes it suitable for large-scale commercialization. Therefore, it is considered a strong competitor for the next generation of power supplies. However, a series of shortcomings limits the large-scale application of the lithium-sulfur battery at the present stage, for example the sluggish reaction kinetics of active sulfur and the degraded cycling stability caused by the shuttle effect. Improving both can ameliorate the rate performance and cycling stability of the lithium-sulfur battery, which are crucial to the practical application of power batteries. In this thesis, in order to solve the above problems, the author first used a facile and scalable method to prepare a carbon black/PEDOT:PSS coating on the separator.
The modified separator was applied to the lithium-sulfur battery as an improved interlayer for the cathode. The principle by which the interlayer improves the sulfur cathode was studied by electrochemical analysis. The high conductivity and polysulfide adsorption ability of the coating deliver an initial specific capacity of 1315 mAh g-1 at 0.2 C, and 699 mAh g-1 at a high rate of 2 C. Secondly, to reduce the areal density of the cathode interlayer, a three-dimensional graphene foam was chosen as the conductive substrate of the interlayer and modified with zinc oxide by atomic layer deposition (ALD), creating a self-standing three-dimensional graphene foam / nano zinc oxide interlayer. This interlayer leads to an initial specific capacity of 1051 mAh g-1 at a 0.5 C rate. Its low areal density (0.15 mg cm-2) also reduces its influence on the energy density of the cathode. As a step forward, the two-dimensional Ti3C2Tx nanosheet (MXene), with high conductivity and polysulfide adsorption characteristics, was selected as an alternative to zinc oxide for modifying the graphene foam (GFMX), which simplifies the synthesis process and enhances the electronic conductivity of the interlayer. With the GFMX interlayer, the lithium-sulfur batteries still maintain a specific capacity of 867 mAh g-1 after 120 cycles at 0.2 C, and 755 mAh g-1 at a high rate of 2 C. In light of the significant improvement brought by the MXene interlayer, modification of the MXene by in-situ growth of nitrogen- and nickel-doped carbon nanosheets was then studied. Results show that the stacking of MXene is greatly reduced and the specific surface area of the material is increased; moreover, the adsorption capacity for polysulfides is largely improved by the nitrogen doping.
When the obtained composite material is used as the separator coating, the lithium-sulfur batteries exhibit a specific capacity of 943 mAh g-1 after 100 cycles at 0.2 C, and 588 mAh g-1 after 500 cycles at 1 C. The average per-cycle capacity decay rate is 0.069%, and the specific capacity of the high-sulfur-loading cathode (3.8 mg cm-2) is 946 mAh g-1, highlighting its potential application in high-performance lithium-sulfur batteries.
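For reference, an average per-cycle decay rate such as the one quoted here is conventionally computed from the initial and final capacities over the cycling window. In the sketch below, the final capacity (588 mAh g-1 after 500 cycles at 1 C) is from the abstract, while the initial 1 C capacity of ~900 mAh g-1 is an assumed value for illustration only:

```python
def avg_decay_rate(c_initial, c_final, cycles):
    """Average per-cycle capacity fade, as a percentage of the initial capacity."""
    return (1 - c_final / c_initial) / cycles * 100

# 588 mAh g-1 after 500 cycles is from the abstract; the initial capacity
# of 900 mAh g-1 is a hypothetical illustration, not a thesis value.
rate = avg_decay_rate(900, 588, 500)
print(f"{rate:.3f}% per cycle")  # → 0.069% per cycle
```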

46.Global Motion Compensation Using Motion Sensor to Enhance Video Coding Efficiency

Author:Fei Cheng 2018
Abstract:Throughout the development of video coding technologies, the main improvements have come from increasing the number of possible prediction directions and adding more block sizes and coding modes. However, there have been no major substantial changes in video coding technology. Conventional video coding algorithms work well for video with motion parallel to the image plane, but their efficiency drops for other kinds of motion, such as dolly motions. Yet an increasing number of videos are captured by moving cameras, as video devices become more diversified and lighter. Therefore, more efficient video coding tools are needed to compress video for new video technologies. In this thesis, a novel video coding tool, Global Motion Estimation using Motion Sensor (GMEMS), is proposed, and a series of related approaches is researched and evaluated. The main aim of this tool is to use advanced motion sensor technology and computer graphics tools to improve and extend traditional motion estimation and compensation, which in turn enhances video coding efficiency. Meanwhile, the computational complexity of motion estimation is reduced, as some of the differences between frames have already been compensated. Firstly, a Motion information based Coding method for Texture sequences (MCT) is proposed and evaluated using the H.264/AVC standard. In this method, a motion sensor of the kind commonly used in smartphones is employed to obtain the panning (rotational) motion. The proposed method compensates for panning motion by projecting frames according to the camera motion and by a new reference allocation method. The experimental results demonstrate an average video coding gain of around 0.3 dB. In order to apply this method to other types of motion for texture videos, the distance of scene objects from the camera surface, i.e. the depth map, has to be used, according to the image projection principle.
Generally, a depth map contains fewer details than texture, especially at low resolution. Therefore, a Motion information based Coding scheme using Frame-Skipping for Depth map sequences (MCFSD) is proposed. The experimental results show that this scheme is effective for low-resolution depth map sequences, enhancing performance by around 2.0 dB. The idea of motion-information-assisted coding is finally employed for both texture and depth map sequences under different types of motion: a Motion information based Texture plus Depth map Coding (MTDC) scheme is proposed for 3D videos. This scheme is applied to the H.264/AVC and the latest H.265/HEVC video coding standards and tested at VGA and HD resolutions. The results show that the proposed scheme improves performance under all conditions. For VGA resolution under the H.264/AVC standard, the average gain is about 2.0 dB. As the latest H.265/HEVC standard already enhances video encoding efficiency, the average gain for HD resolution under H.265/HEVC drops to around 0.4 dB. Another contribution of this thesis is the design of a software-plus-hardware experimental data acquisition method. The proposed motion-information-based video coding schemes require video sequences with accurate camera motion information, but it is difficult to find a proper dataset. Therefore, an embedded-hardware-based experimental data acquisition platform is designed to obtain real-scene video sequences, while a CG-based method is used to produce HD video sequences with accurate depth maps.
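Frame projection under pure camera rotation can be sketched with the standard homography H = K R K⁻¹ from the pinhole model: a pan needs no depth information at all. The intrinsics and the 2-degree pan below are hypothetical, and this is a generic illustration rather than the exact MCT implementation:

```python
import math

# Sketch of panning (rotational) motion compensation: under a pure camera
# rotation R, pixels map between frames through the homography H = K R K^-1,
# so the previous frame can be warped to the new viewpoint with no depth
# information. Intrinsics and rotation below are hypothetical.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def warp(h, x, y):
    """Apply homography h to pixel (x, y) in homogeneous coordinates."""
    u = h[0][0] * x + h[0][1] * y + h[0][2]
    v = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return u / w, v / w

f, cx, cy = 500.0, 320.0, 240.0                      # hypothetical intrinsics
K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]
K_inv = [[1 / f, 0, -cx / f], [0, 1 / f, -cy / f], [0, 0, 1]]

t = math.radians(2.0)                                 # 2-degree pan about the y axis
R = [[math.cos(t), 0, math.sin(t)], [0, 1, 0], [-math.sin(t), 0, math.cos(t)]]

H = matmul(matmul(K, R), K_inv)
u, v = warp(H, 320.0, 240.0)                          # warp the principal point
print(round(u, 1), round(v, 1))                       # x shifts by ~f*tan(2°) ≈ 17.5 px
```

Other motion types (e.g. dolly) move pixels by an amount that depends on scene depth, which is why the later MCFSD and MTDC schemes bring in the depth map.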

47.An Integrated Life Cycle Assessment and System Dynamics Model for Evaluating Carbon Emissions from Construction and Demolition Waste Management of Building Refurbishment Projects

Author:Wenting Ma 2022
Abstract:Since the building sector accounts for more than one third of global carbon emissions, it is imperative that the sector mitigate its emissions to help reach the goal of the COP26 climate conference of achieving a global net zero by mid-century. Building refurbishment (BR) is key to reducing carbon emissions in the building sector by reducing the operational energy consumption of existing buildings instead of demolishing them and building new ones. China is a good example of a country encouraging refurbishment, since it has prioritized BR in its 14th Five-Year Building Energy Efficiency and Green Building Development Plan (2021-2025). Since the number of BR projects in China is therefore likely to significantly increase in the coming years, it is important to evaluate the carbon emissions associated with construction and demolition (C&D) waste to find optimal waste management solutions. However, there are no studies that have considered the carbon emissions of C&D waste management of BR projects from a whole life cycle perspective. This study fills the research gap by developing a novel LCA-SD model, which integrates the features of life cycle assessment (LCA) and system dynamics (SD) to evaluate the carbon emissions of C&D waste management of BR projects through non-linear and dynamic analysis from a whole life cycle perspective. Variables for evaluating the carbon emissions were first identified in four life cycle stages of C&D waste management of BR projects. Causal loop diagrams were then developed to demonstrate the interrelations of the variables in the different life cycle stages, and the novel LCA-SD stock and flow model was formulated based on the causal loop diagrams. The model was validated through a case study of a typical BR project in China. The validated LCA-SD model was used to compare and analyze waste management scenarios for the case study BR project by performing simulations of selected scenarios. 
The simulation results reveal that the secondary material utilization rate is the most effective independent variable for reducing carbon emissions from C&D waste management of the case BR project: 11.28% of total carbon emissions could be reduced by using 31% secondary materials to substitute natural raw materials; improving the combustible waste incineration rate to 100% could reduce total carbon emissions by 6.42%; reducing the on-site waste rate by 50% could reduce them by 1.28%; while improving the inert waste recycling rate to 90% could only reduce them by 1%. From the whole life cycle perspective, the refurbishment material stage accounts for the highest carbon emissions, followed by the refurbishment material end-of-life (EOL) stage and the dismantlement stage, while the refurbishment construction stage accounts for the least. The findings not only highlight the importance of cradle-to-cradle life cycle C&D waste management for mitigating carbon emissions from BR projects, but also demonstrate the effectiveness of the novel integrated LCA-SD model as an “experimental laboratory” for BR C&D waste management decision makers to conduct “what-if” dynamic simulation analysis of various scenarios before embarking on a project.
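The "what-if" mechanics of a stock-and-flow simulation can be sketched in a few lines. All per-step coefficients below are hypothetical illustrations (the sketch does not reproduce the thesis's figures); it only shows how a scenario lever scales one inflow to a carbon stock:

```python
# Minimal stock-and-flow sketch in the spirit of the LCA-SD model: a carbon
# stock accumulates emissions from each life cycle stage, and a scenario
# lever (the secondary material utilization rate) scales the material inflow.
# All per-step coefficients are hypothetical, not thesis data.

def simulate(secondary_rate, steps=12):
    material, construction, dismantle, eol = 40.0, 5.0, 12.0, 20.0  # t CO2e/step (hypothetical)
    stock = 0.0
    for _ in range(steps):
        # Secondary materials displace virgin materials, shrinking the material inflow.
        inflow = material * (1 - secondary_rate) + construction + dismantle + eol
        stock += inflow      # Euler integration: stock(t+1) = stock(t) + inflow * dt, dt = 1
    return stock

baseline = simulate(0.0)
scenario = simulate(0.31)    # 31% secondary material substitution, as in the scenario above
saving = (baseline - scenario) / baseline * 100
print(f"{saving:.1f}% reduction")
```

A real SD model adds feedback loops between the stages (the causal loop diagrams), which is what makes the analysis non-linear and dynamic rather than a simple sum.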

48.Optimization on the Electrical Performance of the Solution-processed Zinc Tin Oxide Thin-film Transistors and its Application Research for Artificial Synapses

Author:Tianshi Zhao 2022
Abstract:Thin-film transistors (TFTs), serving as the core components of active-matrix liquid crystal displays (AMLCDs) and active-matrix organic light-emitting diode (AMOLED) displays, have been intensively researched all over the world. Over the past decades, in order to meet display application requirements of high resolution, large screen size, and low power consumption, metal oxide (MO) semiconductors have been proposed and widely investigated for the fabrication of high-performance TFTs. Compared with traditional TFTs based on amorphous silicon (α-Si) technology, MO-based TFTs (MOTFTs) are reported to have much higher electron field-effect mobility (μFE) due to their large, spherical ns-orbitals (n≥4). Moreover, MO semiconductor materials also have advantages in transparent applications due to their wide bandgap (~3 eV). Therefore, wide-bandgap MO semiconductors, including indium (In), gallium (Ga), zinc (Zn), and tin (Sn) based binary or multi-component oxides, have gradually become promising channel material candidates for advanced TFT-based technologies. However, for the well-established vacuum-based MO fabrication technologies such as magnetron sputtering, atomic layer deposition (ALD), and chemical vapor deposition (CVD), the complex processes, demanding equipment, and small deposition area heavily limit the development of low-cost MO deposition. Therefore, the solution process, a feasible and facile route to deposit MO films under ambient conditions, has been proposed and reported. Nevertheless, there is often a trade-off between the low cost of solution methods and the high performance of TFTs: solvent residues or an incomplete annealing process may introduce defects and ruin the performance of the devices.
Alongside these challenges, many studies have been reported that reduce the side effects brought by the solution process, and plenty of breakthroughs have been achieved. In other words, fabricating high-electrical-performance TFTs by low-cost solution processes still has great room for development and is worthy of study. In this work, for environmental protection and cost reduction considerations, we mainly focus on the spin-coating-based n-type In-free semiconductor zinc tin oxide (ZnSnO, ZTO). We first proposed a deionized (DI) water solvent-based fabrication route for ZTO semiconductor films. The fabrication process was operated at low temperature (≤300 °C) in air. Combined with a silicon dioxide (SiO2) dielectric layer, TFTs with a μFE of 2 cm2 V-1 s-1 were successfully fabricated. Furthermore, with the help of the novel two-dimensional (2D) material MXene, we tuned the work function (WF) of the ZTO channel and optimized both the μFE (13.06 cm2 V-1 s-1) and the gate bias (GB) stability of the TFTs by depositing homojunction-structured channels. Subsequently, we replaced the SiO2 dielectric with solution-processed high-k aluminum oxide (AlOx) films; the devices showed an increased μFE of 28.35 cm2 V-1 s-1 and were successfully applied to a resistor-load inverter (Chapter 2). Secondly, beyond performance optimization, solution-processed TFTs can also be applied to realize advanced, highly parallel neuromorphic network computing tasks. The TFTs that meet this application requirement are regarded as synaptic transistors (STs) and are designed to mimic the biological synapse. Their operating basis is the hysteresis window in the STs' transfer characteristics and their non-volatile, multi-level variable channel conductance.
Here we applied MXene to the interface between the ZTO channel and the SiO2 dielectric layer and proposed a kind of floating-gate transistor (FGT) with the functions of STs. The MXene-induced FGTs (MXFGTs) successfully mimicked the typical behaviors of the biological synapse under both gate voltage (VGS) and channel-incident ultraviolet (UV) light stimuli. To further explore the suitability of the MXFGTs for machine learning tasks, we utilized a classifier based on an artificial neural network (ANN) and the test results of the devices to simulate the image classification process. The training and recognition results on images from the Modified National Institute of Standards and Technology (MNIST) database further proved the application potential of MXFGTs in neural network (NN) systems (Chapter 3). Finally, in Chapter 4, we further improved the light-detecting behavior of the MXene-based STs. A shell layer of germanium oxide (GeOx) was grown to cover the MXene nanosheets through a facile solution method. The obtained GeOx-coated MXene (GMX) nanosheets were doped into the ZTO channel layer and fabricated into GMX-based STs (GMXSTs). Owing to the area-enlarging function of the high-electron-density MXene core and the heterostructure of the GeOx/ZTO bilayer, the GMXSTs showed excellent optoelectronic synaptic performance under visible light stimuli, greatly improved over the MXFGTs. Then, we applied the varied responses of the devices under different input lights to image target-area detection simulations. With the help of this detection pre-processing, the task of counting fluorescent cells stained with 2-(4-Amidinophenyl)-6-indolecarbamidine dihydrochloride (DAPI) was performed correctly. Finally, "night vision"-inspired and brightness-adjusted image reconstruction results are presented, which further indicate the bright future of this kind of synaptic device in the application field of artificial visual perception.
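The μFE figures quoted in this abstract are typically extracted from the slope (transconductance) of a linear-regime transfer curve via μFE = (L / (W·Ci·VDS))·dID/dVGS. The device geometry, capacitance, and synthetic transfer data in the sketch below are hypothetical, chosen only to show the extraction:

```python
# Hedged sketch of linear-regime mobility extraction: fit the slope of the
# Id-Vg transfer curve (the transconductance gm) and convert it with
# mu_FE = L / (W * Ci * Vds) * gm. All device numbers are hypothetical.

def mobility(vg, id_, W, L, Ci, Vds):
    """Least-squares slope of Id vs Vg, converted to cm^2 V^-1 s^-1."""
    n = len(vg)
    mv, mi = sum(vg) / n, sum(id_) / n
    gm = (sum((v - mv) * (i - mi) for v, i in zip(vg, id_))
          / sum((v - mv) ** 2 for v in vg))          # transconductance dId/dVg
    return L / (W * Ci * Vds) * gm

W, L = 0.1, 0.01        # channel width and length, cm (hypothetical)
Ci = 1.5e-8             # gate dielectric capacitance per area, F cm^-2 (hypothetical)
Vds = 1.0               # drain-source voltage, V

# Synthetic linear-regime transfer data with gm = 3e-7 A/V.
vg = [2.0, 4.0, 6.0, 8.0, 10.0]
id_ = [3e-7 * v for v in vg]
print(round(mobility(vg, id_, W, L, Ci, Vds), 2))  # → 2.0
```

Replacing SiO2 with a high-k dielectric raises Ci, which is one reason the AlOx devices can drive more current at the same voltages.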

49.Depth Assisted Background Modeling and Super-resolution of Depth Map

Author:Boyuan Sun 2018
Abstract:Background modeling is one of the fundamental tasks in computer vision, detecting the foreground objects in images. It is used in many applications such as object tracking, traffic analysis, scene understanding, and other video applications. The easiest way to model the background is to obtain a background image that does not include any moving objects. However, in some environments the background may not be available, and it can be changed by surrounding conditions such as illumination changes (a light switched on/off), objects removed from the scene, and objects with a constant moving pattern (waving trees). Robustness and adaptation of the background model are essential to this problem. Mixture of Gaussians (MOG) is one of the most widely used methods for background modeling using color information, whereas the depth map provides an additional dimension of information about the images that is independent of color. In this thesis, color-only methods such as Gaussian Mixture Models (GMM), Hidden Markov Models (HMM), and Kernel Density Estimation (KDE) are first thoroughly reviewed. Then an algorithm that jointly uses color and depth information is proposed, which uses MOG and a single Gaussian model (SGM) to represent recent observations of color and depth respectively. A color-depth consistency check mechanism is also incorporated into the algorithm to improve the accuracy of the extracted background. The spatial resolution of depth images captured by consumer depth cameras is generally limited by the element size of the sensor. To overcome this limitation, depth image super-resolution is proposed, which obtains a high-resolution depth image from the low-resolution depth image by inferring the high-frequency components. Deep convolutional neural networks have been widely and successfully used in various computer vision tasks such as image segmentation, classification, and recognition, with remarkable performance.
Recently, the residual network configuration has been proposed to further improve performance. Inspired by the residual network, we redesign the popular deep model Super-Resolution Convolutional Neural Network (SRCNN) for depth image super-resolution. Based on the idea of the residual network and the SRCNN structure, we propose three neural-network-based approaches to the problem of depth image super-resolution. In these approaches, we introduce a deconvolution layer into the network, which enables learning directly from the original low-resolution image to the desired high-resolution image, instead of using a conventional method such as bicubic interpolation to upsample the image before it enters the network. Then, in order to minimize the sharpness loss near boundary regions, we add layers at the end of the network to learn the residuals.
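The per-pixel single Gaussian model used for the depth channel can be sketched as follows; the learning rate, variance threshold, and depth readings are illustrative values, not the thesis's parameters:

```python
# Sketch of a per-pixel single Gaussian model (SGM) for the depth channel:
# each pixel keeps a running mean and variance, updated with learning rate
# alpha; observations far from the mean (in standard deviations) are flagged
# as foreground. All numbers below are illustrative.

class SingleGaussian:
    def __init__(self, mean, var=25.0, alpha=0.05, k=2.5):
        self.mean, self.var, self.alpha, self.k = mean, var, alpha, k

    def update(self, depth):
        """Return True if `depth` is foreground, and adapt the background model."""
        d = depth - self.mean
        foreground = d * d > (self.k ** 2) * self.var
        if not foreground:  # only background observations refine the model
            self.mean += self.alpha * d
            self.var += self.alpha * (d * d - self.var)
        return foreground

pixel = SingleGaussian(mean=200.0)              # background at depth ~200 (e.g. cm)
readings = [201.0, 199.0, 200.5, 120.0, 200.2]  # 120 = person entering the scene
flags = [pixel.update(z) for z in readings]
print(flags)  # → [False, False, False, True, False]
```

The color channel uses a mixture of such Gaussians per pixel instead of a single one, and the consistency check accepts a background label only when the color and depth decisions agree.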

50.Clothing-based Interfaces for Multimodal Interactions

Author:Vijayakumar Nanjappan 2020
Abstract:Textiles are a vital and indispensable part of the clothing we use daily. They are very flexible, often lightweight, and have a variety of uses. Today, with the rapid development of small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Advances in fabric sensing technology allow us to combine multiple interface modalities. However, most textile-based research uses a unimodal approach, and current input options have limitations, such as restricted gesture types and low social acceptance when interactions are performed in public or in front of unfamiliar people. As an alternative, wrist-based gesture input has the extra benefit of supporting eyes-free interactions, which are subtle and thus socially acceptable. In this research, we propose and develop two fabric-based multimodal interfaces (FABMMIs) which support wrist gestures, touch gestures, and combinations of the two. To do so, we first investigated the acceptance and performance of using the wrist to perform multimodal inputs with FABMMIs for (1) in-vehicle controls and (2) handheld augmented reality (HAR) devices. Through the first user-elicitation study, with 18 users, we devised a taxonomy of wrist and touch gestures for in-vehicle interactions using a wrist-worn FABMMI in a simulated driving setup. We provide an analysis of 864 gestures; the resulting in-vehicle gesture set contains 10 unique gestures, representing 56% of the user-preferred gestures. In our second user-elicitation study, we investigated the use of a fabric-based wearable device as an alternative interface for performing interactions with HAR applications. We present results on users’ gesture preferences for a hand-worn FABMMI by analysing 2,673 gestures from 33 participants for 27 HAR tasks.
Our gesture set includes a total of 13 user-preferred gestures which are socially acceptable and comfortable to use with HAR devices, and we also derived an interaction vocabulary of wrist and thumb-to-index touch gestures. To achieve the input possibilities defined above, we developed strain sensors to capture wrist movements and pressure sensors to detect touch inputs. Our sensors are graphene-modified polyester (PE) fabric and polyurethane (PU) foam respectively, with high graphene loading and good adhesion to both the PE fabric and the PU foam, which enhances the sensitivity and lifetime of the sensors. Using our in-house developed sensors, we built two prototypes: (1) WRISMMi, a wrist-worn interface for in-vehicle interactions, and (2) HARMMi, a hand-worn device for HAR interactions. A linear regression model is used to set global thresholds for the different bending and pressing magnitude levels. We tested the suitability and performance of our prototypes on a set of interactions extrapolated from the two user-elicitation studies. Our results suggest that FABMMIs are viable for supporting a variety of natural, eyes-free, and unobtrusive interactions in multitasking situations.
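The regression-based thresholding can be sketched as below. The calibration readings and the three magnitude levels are hypothetical, not the actual sensor characteristics; the sketch only shows the idea of fitting reading-to-level and quantizing new readings:

```python
# Sketch of linear-regression threshold setting: fit a linear model between
# raw sensor readings and bend magnitude levels, then map new readings to
# the nearest discrete level. Calibration data below is hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Calibration: relative resistance change of the strain sensor vs bend level
# (0 = flat, 1 = slight bend, 2 = full bend) -- illustrative values only.
readings = [0.02, 0.05, 0.24, 0.27, 0.48, 0.52]
levels =   [0,    0,    1,    1,    2,    2]
a, b = fit_line(readings, levels)

def classify(reading):
    """Map a raw reading to the nearest discrete bend level."""
    return min(2, max(0, round(a * reading + b)))

print([classify(r) for r in [0.03, 0.26, 0.50]])  # → [0, 1, 2]
```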

51.Learning Semantic Segmentation with Weak and Few-shot Supervision

Author:Bingfeng Zhang 2022
Abstract:Semantic segmentation, which aims at dense pixel-level classification, is a core problem in computer vision. Requiring sufficient and accurate pixel-level annotated data during training, semantic segmentation has witnessed great progress with recent advances in deep neural networks. However, such pixel-level annotation is time-consuming and relies heavily on human effort, and segmentation performance drops dramatically on unseen classes or when the annotated data is insufficient. To overcome these drawbacks, many researchers focus on learning semantic segmentation with weak and few-shot supervision, i.e., weakly supervised semantic segmentation and few-shot segmentation. Specifically, weakly supervised semantic segmentation aims to make pixel-level classifications with weak annotations (e.g., bounding-box, scribble, and image-level labels) as supervision, while few-shot segmentation attempts to segment unseen object classes with a few annotated samples. In this thesis, we focus on image-label supervised semantic segmentation, bounding-box supervised semantic segmentation, scribble supervised semantic segmentation, and few-shot segmentation. For weakly supervised semantic segmentation with image-level annotation, current approaches mainly adopt a two-step solution, which first generates pseudo pixel masks that are then fed into a separate semantic segmentation network. However, these two-step solutions usually employ many bells and whistles to produce high-quality pseudo masks, making this kind of method complicated and inelegant. We harness the image-level labels to produce reliable pixel-level annotations and design a fully end-to-end network that learns to predict segmentation maps. Concretely, we first leverage an image classification branch to generate class activation maps for the annotated categories, which are further pruned into tiny reliable object/background regions.
Such reliable regions then serve directly as ground-truth labels for the segmentation branch, where both a global-information and a local-information sub-branch are used to generate accurate pixel-level predictions. Furthermore, a new joint loss is proposed that considers both shallow and high-level features. For weakly supervised semantic segmentation with bounding-box annotation, most existing approaches rely on a deep convolutional neural network (CNN) to generate pseudo labels by propagating initial seeds. However, CNN-based approaches only aggregate local features, ignoring long-distance information. We propose a graph neural network (GNN)-based architecture that takes full advantage of both local and long-distance information. We first transfer the weak supervision to initial labels, which are then formed into semantic graphs based on our newly proposed affinity convolutional neural network. The built graphs are input to our GNN, in which an affinity attention layer is designed to acquire short- and long-distance information from soft graph edges, so as to accurately propagate semantic labels from the confident seeds to the unlabeled pixels. However, to guarantee the precision of the seeds, we only adopt a limited number of confident pixel seed labels, which may lead to insufficient supervision for training. To alleviate this issue, we further introduce a new loss function and a consistency-checking mechanism to leverage the bounding-box constraint, so that more reliable guidance can be included in the model optimization. More importantly, our approach can be readily applied to bounding-box supervised instance segmentation or other weakly supervised semantic segmentation tasks, showing great potential to become a unified framework for weakly supervised semantic segmentation.
For weakly supervised semantic segmentation with scribble annotation, the regularized loss has been proven an effective solution. However, most existing regularized losses only leverage static shallow features (color, spatial information) to compute the regularized kernel, which limits their final performance, since such static shallow features fail to describe pair-wise pixel relationships in complicated cases. We propose a new regularized loss that utilizes both shallow and deep features that are dynamically updated, in order to aggregate sufficient information to represent the relationships between different pixels. Moreover, in order to provide accurate deep features, we adopt a vision transformer as the backbone and design a feature consistency head to train the pair-wise feature relationship. Unlike most approaches that adopt a multi-stage training strategy with many bells and whistles, our approach can be trained directly in an end-to-end manner, in which the feature consistency head and our regularized loss benefit from each other. For few-shot segmentation, most existing approaches use masked Global Average Pooling (GAP) to encode an annotated support image into a feature vector to facilitate query image segmentation. However, this pipeline unavoidably loses some discriminative information due to the averaging operation. We propose a simple but effective self-guided learning approach, in which the lost critical information is mined. Specifically, by making an initial prediction for the annotated support image, the covered and uncovered foreground regions are encoded into primary and auxiliary support vectors using masked GAP, respectively. By aggregating both primary and auxiliary support vectors, better segmentation performance is obtained on query images.
Enlightened by our self-guided module for 1-shot segmentation, we propose a cross-guided module for multi-shot segmentation, in which the final mask is fused using predictions from multiple annotated samples, with high-quality support vectors contributing more and low-quality ones contributing less. This module improves the final prediction in the inference stage without re-training. 
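The masked GAP and the primary/auxiliary support vectors described above can be sketched in a few lines of numpy. This is an illustrative reimplementation under assumed array shapes (`features` as channels-first maps, binary masks), not the thesis's exact code:

```python
import numpy as np

def masked_gap(features, mask):
    """Masked Global Average Pooling: average a (C, H, W) feature map
    over the pixels where the (H, W) binary mask is 1."""
    denom = mask.sum()
    if denom == 0:
        return np.zeros(features.shape[0])
    return (features * mask).reshape(features.shape[0], -1).sum(axis=1) / denom

def self_guided_vectors(features, gt_mask, pred_mask):
    """Split the support foreground into the region covered by the initial
    prediction (primary) and the region it missed (auxiliary), then encode
    each region with masked GAP."""
    covered = gt_mask * pred_mask          # foreground the prediction found
    uncovered = gt_mask * (1 - pred_mask)  # foreground the prediction missed
    return masked_gap(features, covered), masked_gap(features, uncovered)
```

Aggregating both vectors (e.g. a weighted sum) retains foreground information that a single averaged vector would dilute, which is the intuition behind the self-guided module.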

52.Person Re-identification and Tracking in Video Surveillance

Author:Yanchun Xie 2020
Abstract:Video surveillance is one of the most essential topics in the computer vision field. With the rapid and continuous increase in the use of surveillance cameras to obtain portrait information in scenes, it has become a very important system for security and criminal investigations. A video surveillance system involves many key technologies, including object recognition, object localization, object re-identification, and object tracking, by which the system can identify and follow the movements of objects and persons. In recent years, person re-identification and visual object tracking have become hot research directions in computer vision. A re-identification system aims to recognize and identify a target with the required attributes, and a tracking system aims to follow and predict the movement of the target after the identification process. Researchers have used deep learning and computer vision technologies to significantly improve the performance of person re-identification. However, the study of person re-identification is still challenging due to complex application environments such as lighting variations, complex background transformations, low-resolution images, occlusions, and similar dressing of different pedestrians. The challenge of this task also comes from the unavailability of bounding boxes for pedestrians and the need to search for the person over whole gallery images. To address these critical issues in modern person identification applications, we propose an algorithm that can accurately localize persons by learning to minimize intra-person feature variations. We build our model upon a state-of-the-art object detection framework, i.e., Faster R-CNN, so that high-quality region proposals for pedestrians can be produced in an online manner. 
In addition, to relieve the negative effects caused by varying visual appearances of the same individual, we introduce a novel center loss that increases the intra-class compactness of feature representations. The center loss encourages persons with the same identity to have similar feature characteristics. Beyond the localization of a single person, we explore the more general visual object tracking problem. The main task of visual object tracking is to predict the location and size of the tracking target accurately and reliably in subsequent image sequences when the target is given at the beginning of the sequence. A visual object tracking algorithm with high accuracy, good stability, and fast inference speed is necessary. In this thesis, we study the updating problem for two kinds of tracking algorithms among the mainstream tracking approaches and improve their robustness and accuracy. First, we extend the Siamese tracker with a model updating mechanism to improve its tracking robustness. A Siamese tracker uses a deep convolutional neural network to obtain features and compares the new frame's features with the target features from the first frame. The candidate region with the highest similarity score is taken as the tracking result. However, such trackers are not robust against large target variation, because the matching template is never updated during the whole tracking process. To combat this defect, we propose an ensemble Siamese tracker, in which the final similarity score is also affected by the similarity with tracking results in recent frames instead of considering the first frame alone. Tracking results in recent frames are used to adjust the model to continuous target change. Meanwhile, we combine an adaptive candidate sampling strategy with a large-displacement optical flow method to further improve performance. 
Second, we investigate the classic correlation-filter-based tracking algorithm and propose a better model selection strategy based on reinforcement learning. The correlation filter has proven to be a useful tool for a number of approaches in visual tracking, particularly for seeking a good balance between tracking accuracy and speed. However, correlation-filter-based models are susceptible to wrong updates stemming from inaccurate tracking results. To date, little effort has been devoted to handling the correlation filter update problem. In our approach, we update and maintain multiple correlation filter models in parallel and use deep reinforcement learning to select an optimal correlation filter model among them. To make the decision process efficient, we propose a decision-net to deal with target appearance modelling, trained on hundreds of challenging videos using proximal policy optimization and a lightweight learning network. An exhaustive evaluation of the proposed approach on the OTB100 and OTB2013 benchmarks shows its effectiveness.
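The center loss mentioned in this abstract can be illustrated with a minimal numpy sketch. The formulation below follows the common center-loss definition (squared distance to a per-class centre, with centres nudged towards the feature mean); the thesis's exact variant may differ:

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: mean squared distance between each feature vector and
    the centre of its class, encouraging intra-class compactness."""
    diffs = features - centers[labels]               # (N, D)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def update_centers(features, labels, centers, alpha=0.5):
    """Move each class centre a step towards the mean of its features;
    alpha controls the update rate."""
    new_centers = centers.copy()
    for c in np.unique(labels):
        new_centers[c] += alpha * (features[labels == c].mean(axis=0) - centers[c])
    return new_centers
```

In training, this term is added to the identification (classification) loss with a small weight, so features of the same identity are pulled together while remaining separable across identities.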

53.Determinants of Asymmetric Cost Behavior

Author:Yuxin Shan 2019
Abstract:Asymmetric cost behavior describes a non-linear association between changes in costs and changes in sales. This dissertation consists of three papers identifying different determinants of asymmetric cost behavior. Drawing on the economic theory of sticky costs, institutional theory, and the “grabbing hand” theory, the first paper identifies three factors in China that increase the level of cost stickiness: state ownership, the five-year government plan, and the density of skilled labor. Using data from 34 OECD countries, the second paper provides empirical evidence that companies in high-tax-rate jurisdictions are more likely to have a greater level of cost stickiness than companies in low-tax-rate jurisdictions. Using U.S. data, the third paper explores the association between high-quality information technology (IT) and the level of cost stickiness. Consistent with my expectation, empirical results show that high-quality IT weakens asymmetric cost behavior. In addition, the third paper investigates the relationship between high-quality IT and audit efficiency, showing that high-quality IT enhances audit quality and decreases audit fees. This study contributes to the cost accounting literature by suggesting additional determinants affecting managers' resource adjustment decisions. Meanwhile, this study sheds light on the tax avoidance literature by providing empirical evidence for the effect of country-level statutory tax rates on “real” corporate decisions. This study also contributes to the extant literature on the return to IT investments, showing that the quality of IT affects managers' resource adjustment decisions and audit efficiency. Additionally, this study provides guidance for policymakers about how managers react to changes in government plans and regulations.
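Cost stickiness of the kind studied above is commonly estimated with the Anderson–Banker–Janakiraman (ABJ) specification, which regresses log cost changes on log sales changes plus an interaction with a sales-decrease dummy; a negative interaction coefficient indicates sticky costs. The sketch below runs that regression on synthetic data (the dissertation's exact specifications and controls may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
d_sales = rng.normal(0.05, 0.2, n)     # log change in sales
dec = (d_sales < 0).astype(float)      # dummy: 1 if sales decreased

# Simulate sticky costs: response to sales is weaker when sales fall,
# i.e. the true interaction coefficient is negative (-0.3 here).
d_cost = 0.01 + 0.8 * d_sales - 0.3 * dec * d_sales + rng.normal(0, 0.05, n)

# ABJ-style regression: d_cost ~ 1 + d_sales + dec*d_sales
X = np.column_stack([np.ones(n), d_sales, dec * d_sales])
beta, *_ = np.linalg.lstsq(X, d_cost, rcond=None)
sticky = beta[2] < 0  # negative interaction term => cost stickiness
```

With enough observations, the estimated interaction coefficient recovers the simulated asymmetry, which is the signature each of the three papers tests for across its respective determinants.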


Author:Xiaokai Zhang 2020
Abstract: Environmental pollution has increasingly become a global issue in recent years. Heavy metals are among the most prevalent pollutants and are persistent environmental contaminants, since they cannot be degraded or destroyed. Environmental risk assessment (ERA) paves the way for streamlined environmental impact assessment and environmental management of heavy metal contamination. Bioavailability is increasingly used as an indicator of risk (the exposure to pollutants), and for this reason, whole-cell biosensors or bioreporters and speciation modelling have both become of increasing interest for determining the bioavailability of pollutants. While there is a great emphasis on metals as toxicants in the environment, some metals also serve as micronutrients. The same processes that introduce metals as pollutants into the environment also introduce metals that may function, in some cases, as micronutrients, which then have a role to play in eutrophication, i.e. excessive nutrient richness that impairs many freshwater ecosystems and is a prominent cause of harmful algal blooms. In this thesis, I cover a wide range of topics; a unifying theme is the biological impacts of metals in the environment and their implications for environmental risk assessment. This thesis begins with my initial work, in which I conducted laboratory experiments using a bioreporter, a genetically engineered bacterium that produces dose-dependent signals in response to target chemicals, to test the bioavailability of lead (Pb) in aqueous systems containing Pb-complexing ligands. Lead serves as a good model because of its global prevalence and toxicity. The studied ligands include ethylenediaminetetraacetic acid (EDTA), meso-2,3-dimercaptosuccinic acid (DMSA), leucine (Leu), methionine (Met), cysteine (Cys), glutathione (GSH), and humic acid (HA). 
The results showed that EDTA, DMSA, Cys, GSH, and HA amendment significantly reduced Pb bioavailability to the bioreporter with increasing ligand concentration, whereas Leu and Met had no notable effect on bioavailability at the concentrations tested. Natural water samples from Lake Tai (Taihu) were also studied, showing that dissolved organic carbon in Taihu water significantly reduced Pb bioavailability. Meanwhile, the bioreporter results are in accord with the reduction of aqueous Pb2+ expected from the relative complexation affinities of the different ligands tested. These findings represent a first step toward using bioreporter technology to streamline an approach to ERA. Dissolved organic matter (DOM) plays an important role in both speciation modelling and the bioavailability of heavy metals. Owing to the variation of DOM properties in natural aquatic systems, improvements to the existing standard one-size-fits-all approach to modelling metal-DOM interactions are needed for ERA. My next effort was to investigate variations in DOM and Pb-DOM binding across the regional expanse of Taihu. The results show that DOM components are highly variable across different regions of Taihu, and bivariate and multivariate analyses confirm that water quality and DOM characterisation parameters are strongly interrelated. I find that the conditional stability constant of Pb-DOM binding is strongly affected by the water's chemical properties and the composition of DOM, though it is not itself a parameter that differentiates lake water properties in different regions of the lake. The variability of DOM composition and Pb-DOM binding strength across Taihu is consistent with prior findings that a one-size-fits-all approach to metal-DOM binding may lead to inaccuracies in commonly used speciation models; such generalised approaches therefore need improvement for regional-level ERA in complex watersheds. 
Based on the findings from the investigation of Pb-DOM complexation, I compared the one-size-fits-all approach with different methods of implementing site-specific variations in modelling. I was able to substantively improve the procedures of the existing speciation model commonly used in ERA applications: the optimised model agrees much more accurately with bioreporter-measured bioavailable Pb. This streamlined approach to ERA has performed well in a first regional-scale freshwater demonstration. There is a close connection between water and sediment contamination, and I also studied Pb bioavailability in lake sediment with a focus on the ramifications for environmental risk. For this work, I studied sediment samples from Brothers Water, a lake in the United Kingdom that is a much simpler system than Taihu but is severely impacted by centuries of Pb mining in the immediate vicinity. The results showed that the total concentration of Pb in the sediment has an inverse relationship with bioavailable Pb in the test samples, a positive relationship with sediment particle size and sand content, and a negative relationship with clay content. I find that the relative amount of bioavailable Pb in the lake sediments is low, although surface sediments may have much higher bioavailable Pb than deeper sediments. To address the effects of metals and other micronutrients on algal growth, I performed small-scale mesocosm nutrient limitation bioassays using boron (B), iron (Fe), cobalt (Co), copper (Cu), molybdenum (Mo), nitrogen (N), and phosphorus (P) on phytoplankton communities sampled from different locations in Taihu, to test the relative effects of micronutrients on in situ algal assemblages. I found a number of statistically significant effects of micronutrient stimulation on growth or shifts in algal assemblage. The most notable finding concerned copper, which, to my knowledge, is unique in the literature. 
However, I am unable to rule out a homeostatic link between copper and iron. The results from my study concur with a small and emerging body of literature suggesting that the potential role of micronutrients in harmful algal blooms and eutrophication requires further consideration in ERA and environmental management. The findings from this work are not only of interest to academics but represent feasible approaches by which environmental practitioners may evaluate risk. My work on Pb needs further validation; however, it would be validatable through impact assessment studies and is therefore directly and immediately extensible to environmental risk. I am therefore hopeful that my work on ERA will drive tangible outcomes in environmental management. Likewise, though my work on the effect of micronutrients on algal growth is more fundamental than applied at present, there are important and immediate implications for environmental management: at present, copper is used as an algicide. My work suggests that the long-term effect of copper at 20 μg·L-1 could possibly encourage rather than inhibit harmful algal blooms. It is satisfying to arrive at a scientifically interesting and, at the same time, practically useful outcome from my years of work; nevertheless, I hope that this and other similar work on risk and management interventions can inspire a shift to pollution prevention rather than “end-of-pipe” solutions.  

55.Learning and Leveraging Structured Knowledge from User-Generated Social Media Data

Author:Hang Dong 2020
Abstract:Knowledge has long been a crucial element in Artificial Intelligence (AI), which can be traced back to knowledge-based systems, or expert systems, in the 1960s. Knowledge provides context to facilitate machine understanding and improves the explainability and performance of many semantic-based applications. The acquisition of knowledge is, however, a complex step, normally requiring much effort and time from domain experts. In machine learning, a key domain of AI, the learning and leveraging of structured knowledge, such as ontologies and knowledge graphs, have become popular in recent years with the advent of massive user-generated social media data. The main hypothesis in this thesis is therefore that a substantial amount of useful knowledge can be derived from user-generated social media data. A popular, common type of social media data is social tagging data, accumulated from users' tagging on social media platforms. Social tagging data exhibit unstructured characteristics, including noisiness, flatness, sparsity, and incompleteness, which hinder efficient knowledge discovery and usage. The aim of this thesis is thus to learn useful structured knowledge from social media data despite these unstructured characteristics. Several research questions have been formulated in relation to the hypothesis and the research challenges. A knowledge-centred view is taken throughout this thesis: knowledge bridges the gap between massive user-generated data and semantic-based applications. The study first reviews concepts related to structured knowledge, then focuses on two main parts: learning structured knowledge and leveraging structured knowledge from social tagging data. To learn structured knowledge, a machine learning system is proposed to predict subsumption relations from social tags. 
The main idea is to learn to predict accurate relations using features generated with probabilistic topic modelling and founded on a formal set of assumptions for deriving subsumption relations. Tag concept hierarchies can then be organised to enrich existing Knowledge Bases (KBs), such as DBpedia and the ACM Computing Classification System. The study presents relation-level evaluation, ontology-level evaluation, and a novel Knowledge Base Enrichment based evaluation, and shows that the proposed approach can generate high-quality and meaningful hierarchies to enrich existing KBs. To leverage the structured knowledge of tags, the research focuses on the task of automated social annotation and proposes a knowledge-enhanced deep learning model. Semantic-based loss regularisation is proposed to enhance the deep learning model with the similarity and subsumption relations between tags. Besides, a novel guided attention mechanism is proposed to mimic users' behaviour of reading the title before digesting the content for annotation. The integrated model, Joint Multi-label Attention Network (JMAN), significantly outperformed state-of-the-art, popular baseline methods on four real-world datasets, with consistent performance gains from the semantic-based loss regularisers across several deep learning models. With careful treatment of the unstructured characteristics and with the novel probabilistic and neural network based approaches, useful knowledge can be learned from user-generated social media data and leveraged to support semantic-based applications. This validates the hypothesis of the research and addresses the research questions. Future studies will explore methods to efficiently learn and leverage various other types of structured knowledge and to extend the current approaches to other user-generated data.

56.Analyses of Investment Bank-Affiliated Mutual Fund Performance

Author:Obrey Michelo 2022
Abstract:This thesis presents three distinct essays on investment bank-affiliated mutual funds. The essays contribute to the ongoing debate on the net impact of investment bank-mutual fund relationships on investor wealth maximization. I approach this issue by addressing the following three specific research questions: First, do mutual funds affiliated with investment banks deliver better investment performance to investors than non-affiliated mutual funds? Second, do investment banks add investment value to affiliated mutual funds? Finally, if so, what is the possible mechanism of investment value creation? In the first essay, I study the performance of U.S. domestic equity mutual funds managed by fund families affiliated with investment banks. My analysis, based on various performance metrics, shows that investment bank-affiliated mutual funds significantly outperform peer mutual funds. Consistent with the information advantage hypothesis, I find that the outperformance is more pronounced among affiliated mutual funds that hold stocks covered by their equity research divisions. Overall, my findings are consistent with the idea that investment banks strategically transfer performance to their affiliated mutual funds, benefiting fund investors in an economically meaningful way. In the second essay, I investigate whether investment banks' equity research divisions add investment value to affiliated mutual funds. I find that stocks covered by the affiliated equity research division outperform non-covered stocks within an investment bank-affiliated mutual fund's portfolio. Consistent with the information flow hypothesis, I find that the highly (lowly) held covered stocks significantly outperform the highly (lowly) held non-covered stocks. Furthermore, the results also reveal that newly purchased covered stocks significantly outperform newly purchased non-covered stocks. 
Overall, these results suggest that investment banks' equity research divisions make a marginal contribution to affiliated funds by assisting fund managers in their covered-stock selection and trading decisions. In the final essay, I explore how mutual funds affiliated with investment banks benefit from recommendations issued by their investment bank-affiliated analysts. Because of limitations in directly observing services or potential non-public information provided by the equity research division to investment bank-affiliated fund managers, I investigate this issue from a new direction. Specifically, I examine the investment bank-affiliated analyst recommendations that disagree profoundly with the consensus recommendations issued on stocks that at least one of the affiliated mutual funds holds in its portfolio. I find that the performance of the covered stocks is consistent (i.e., in the same direction) with investment bank-affiliated analysts' dissent recommendations. Thus, investment bank-affiliated analysts' dissent recommendations have investment value. I also find that dissent recommendations have more investment value when issued by investment bank-affiliated analysts who are more experienced or employed by large and prestigious investment banks. My findings are consistent with the idea that the investment value of access to the equity research division by investment bank-affiliated mutual funds is highest when their sell-side analysts disagree profoundly with the consensus recommendations issued on covered stocks. Put together, the three essays' findings have significant implications. First, the findings improve the understanding of mutual fund investors and practitioners from fund management firms regarding the functionality of investment bank-affiliated mutual funds. There is a need to revisit the oft-offered advice to prefer stand-alone funds over bank-affiliated funds. 
Second, the findings call for regulatory debate concerning the potential spillover effects between other businesses and mutual fund families affiliated with the investment bank. Indeed, there is a need for mutual fund regulators to facilitate better investor protection and a fairer market. Finally, the findings add to the literature investigating spillover effects in financial conglomerates offering multiple services.

57.Theoretical and Numerical study on Optimal Mortgage Refinancing Strategy

Author:Jin ZHENG 2015
Abstract:This work studies the optimal refinancing strategy for debtors from the viewpoint of balancing profit and risk, where the strategy is formulated as a utility optimization problem involving the expectation and variance of the discounted profit from refinancing. An explicit solution is given when the dynamics of the interest rate follow an affine model with zero-coupon bond prices. The results provide references for debtors dealing with refinancing by predicting the value of the contract in the future. Special cases are considered in which the interest rates are deterministic functions. Our formulation is robust and applicable to all short-rate stochastic processes satisfying affine models.
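The mean-variance utility described above can be illustrated numerically. The sketch below evaluates U = E[X] - λ·Var[X] on hypothetical Monte Carlo draws of the discounted refinancing profit; the distribution, the risk-aversion value, and the function name are illustrative assumptions, not the thesis's model:

```python
import numpy as np

def refinance_utility(profits, risk_aversion=1.0):
    """Mean-variance utility of the discounted refinancing profit X:
    U = E[X] - risk_aversion * Var[X]."""
    return profits.mean() - risk_aversion * profits.var()

# Hypothetical Monte Carlo draws of the discounted profit from refinancing
rng = np.random.default_rng(1)
profits = rng.normal(5.0, 2.0, 100_000)

u = refinance_utility(profits, risk_aversion=0.5)
```

Under an affine short-rate model, the expectation and variance admit closed forms via zero-coupon bond prices, which is what makes the explicit solution in the thesis possible; the Monte Carlo version here is just a generic stand-in.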

58.Compressive Sensing Based Grant-Free Communication

Author:Yuanchen Wang 2022
Abstract:Grant-free communication, where each user can transmit data without following a strict access grant process, is a promising technique to reduce latency and support massive numbers of users. In this thesis, compressive sensing (CS), which exploits signal sparsity to recover data from a small number of samples, is investigated for user activity detection (UAD), channel estimation, and signal detection in grant-free communication, in order to extract information from the signals received by the base station (BS). First, CS-aided UAD is investigated by utilizing quasi-time-invariant channel tap delays as prior information for the burst users in the internet of things (IoT). Two UAD algorithms are proposed, referred to as gradient-based time-invariant channel tap delays assisted CS (g-TIDCS) and mean-value-based TIDCS (m-TIDCS). In particular, g-TIDCS and m-TIDCS do not require any prior knowledge of the number of active users, unlike existing approaches, and are therefore more practical. Second, periodic communication, one of the salient features of IoT, is considered. Two schemes, namely periodic block orthogonal matching pursuit (PBOMP) and periodic block sparse Bayesian learning (PBSBL), are proposed to exploit the non-continuous temporal correlation of the received signal for joint UAD, channel estimation, and signal detection. Theoretical analysis and simulation results show that PBOMP and PBSBL outperform existing schemes in terms of the success rate of UAD, bit error rate (BER), and accuracy of period and channel estimation. Third, UAD and channel estimation for grant-free communication in the presence of massive numbers of users actively connected to the BS are studied. An iterative UAD and signal detection approach for the burst users is proposed, in which the interference of the connected users on the burst users is reduced by applying a preconditioning matrix to the received signals at the BS. 
The proposed approach provides significant performance gains over existing algorithms in terms of the success rate of UAD and BER. Last but not least, since physical layer security has become a critical issue for grant-free communication, the channel reciprocity in time-division duplex systems is utilized to design environment-aware (EA) pilots derived from transmission channels to prevent eavesdroppers from acquiring users' channel information. The proposed EA-pilot-based approach achieves a high level of security by degrading the eavesdropper's normalized mean square error performance of channel estimation.
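The greedy sparse-recovery idea underlying schemes such as PBOMP can be seen in plain orthogonal matching pursuit (OMP), which recovers a sparse activity/signal vector x from y = Ax by iteratively selecting the dictionary column most correlated with the residual. The sketch below is the textbook OMP baseline, not any of the thesis's extended algorithms:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A x
    by greedily selecting columns and re-fitting on the support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit on the selected support, then update residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In a grant-free setting, the columns of A would correspond to users' pilot sequences and the recovered support indicates which users are active; block and periodic structure (as in PBOMP/PBSBL) adds further prior information on top of this baseline.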

59.Public Participation in the Urban Regeneration Process - A comparative study between China and the UK

Author:Lei SUN 2016
Abstract:The primary aim of this research is to explore how urban regeneration policies and practices are shaped by larger social, political, and economic structures in China and the UK respectively, and how individual agents involved in the regeneration process formulate their strategies, take their actions, and at the same time use discourses to legitimize their actions. It further probes the lessons each country could learn from the other's success or failure in implementing regeneration initiatives. This thesis adopts a cross-national comparative strategy and draws intensively on Variegated Neoliberalism, Neoliberal Urbanism, and Critical Urban theory in developing its theoretical framework. The comparison was conducted at three levels. At the national level, the evolution of urban regeneration and public participation policies and practices in both countries is compared; at the city level, neoliberal urban policies and their impacts on the development of two selected cities, Liverpool in the UK and Xi'an in China, are compared; at the micro level, the major players' interactions and the discourses they used to underpin their actions in two selected case studies, the Kensington Regeneration in Liverpool and the Drum Tower Muslim District in Xi'an, are examined and compared. In carrying out the study, literature regarding the transformation of urban policies in the two countries and detailed information on the two selected cities and case studies were reviewed, and around 35 semi-structured interviews were conducted. The research results demonstrate the suitability of Variegated Neoliberalism for explaining how the process of neoliberalization in both China and the UK is affected by non-market elements. 
It is found that the stage of economic development, the degree of decentralization, the features of politics, and the degree of state intervention in economic areas have played significant roles in shaping the unique features of urban regeneration policies in the two countries. In spite of the differences, similar trends towards neoliberalization can be found in the evolution of urban regeneration policies and practices in both countries, including the elimination of public housing and low-rent accommodation and the creation of opportunities for speculative investment in real estate markets; official discourses of urban disorder, as well as 'entrepreneurial' discourses and representations focused on urban revitalization and reinvestment, play significant roles in the formation and implementation of regeneration policies in both countries. Moreover, similar tactics are used by municipal governments in both countries to overcome resistance from local residents. The research also found that the discourses used by the municipal governments in describing the regeneration projects are heavily influenced by Neoliberal Urbanism, significantly different from those used by local residents, who intensively referenced concepts from Critical Urban theory. It is suggested that the Chinese government should learn from its British counterpart's experience in introducing partnerships to deliver urban regeneration programs, and at the same time learn how to use formal venues to resolve conflicts resulting from physical regeneration programs. For the British government, lessons could be learnt from China's successful experiences in decentralization and the empowerment of municipalities.

60.The impacts of supply chain design on firm performance: perspectives from leadership, network structure, and resource dependency

Author:Taiyu Li 2022
Abstract: Supply chain management has an important role to play in business. To keep a company competitive and able to survive in international competition, it is important to optimise the processes of production, supply, and sales. Today, operating companies are faced with an ever-increasing amount of information and problems, and the relationships between supply chain participants are becoming increasingly complex. These problems are even more acute in developing countries. This calls for advanced supply chain design to manage the entire operational process of the business. Especially in the post-epidemic era, business development in many factories has become unpredictable, and transport logistics and costs are difficult to manage and control accurately. The importance of supply chain resilience is becoming more and more evident, and the field of supply chain research needs to break out of its old boundaries and seek more diverse development models and supply chain designs. The results of this thesis first reveal how supply chain leadership affects supply chain performance, providing a theoretical basis for building efficient supply chains. The thesis then reveals the relationship between the risk contagion efficiency of the supply chain network and individual competitiveness. Next, it emphasizes the impact of resource dependence and operational slack on improving supply chain resilience, and provides guidance and suggestions for designing supply chain structures in the post-COVID era. Finally, the discussion of supplier relationships and innovation performance provides suggestions on how to improve competitiveness and innovation ability when designing supply chain structures. 
This thesis conducts a rigorous quantitative analysis of the factors that need to be considered in supply chain design from multiple perspectives and provides a sufficient discussion of their impact, which contributes to research in the field of supply chain management. This thesis also provides policy and corporate operational management guidance for practitioners related to supply chains.

61.Intelligent Global Maximum Power Point Tracking Strategies Based on Shading Perception for Photovoltaic Systems

Author:Ziqiang Bi 2021
Abstract:When a Photovoltaic (PV) system is partially shaded, the current-voltage (I-V) and power-voltage (P-V) curves exhibit multiple stairs/peaks, and the locus of the Maximum Power Point (MPP) varies over a wide range. Such Partial Shading Conditions (PSC) bring challenges to Maximum Power Point Tracking (MPPT) systems. This thesis presents novel shading information to characterize complex PSC, together with MPPT techniques based on shading perception. Shading information is a mathematical indicator that expresses shading patterns. Existing shading information, such as shading rate and shading strength, has the limitation that it can only characterize PSC with two irradiation levels. To widen the application range of shading information, the shading matrix and shading vector are proposed in this thesis, along with identification and detection methods for them. Results from simulations and experiments show the effectiveness and accuracy of the proposed shading detection methods. Under PSC, the power characteristics of PV systems are so complicated that multiple MPPs exist, and traditional MPPT techniques may be trapped in Local MPPs (LMPPs) instead of the Global MPP (GMPP). In this thesis, novel methods are proposed to estimate the GMPP location from the detected shading information. The proposed shading-perception MPPT techniques are capable of tracking the GMPP quickly and accurately. Simulations and experiments validate the performance of the proposed MPPT methods in comparison with several well-known MPPT methods.
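A minimal sketch of the problem this abstract describes, assuming a hypothetical two-peak P-V curve (not the thesis's model or method): a conventional perturb-and-observe tracker climbs only the nearest peak and can lock onto a local MPP, while a coarse global scan of the voltage range followed by local refinement reaches the global MPP.

```python
def pv_power(v):
    # Hypothetical P-V curve under partial shading: a local peak near
    # v = 10 and the global peak near v = 30 (illustration only).
    return max(0.0, 40 - (v - 10) ** 2) + max(0.0, 90 - 0.5 * (v - 30) ** 2)

def perturb_and_observe(v0, step=0.5, iters=200):
    """Classic hill-climbing MPPT: follows the local power gradient only."""
    v = v0
    for _ in range(iters):
        if pv_power(v + step) > pv_power(v):
            v += step
        elif pv_power(v - step) > pv_power(v):
            v -= step
        else:
            break
    return v

def global_scan_then_track(v_min=0.0, v_max=40.0, coarse=2.0):
    """Coarse sweep of the whole voltage range, then refine locally."""
    candidates = [v_min + i * coarse for i in range(int((v_max - v_min) / coarse) + 1)]
    v_best = max(candidates, key=pv_power)
    return perturb_and_observe(v_best)

v_local = perturb_and_observe(v0=8.0)   # starts near the local peak, gets trapped
v_global = global_scan_then_track()     # finds the global peak
```

The thesis's contribution is to replace the blind global scan with an estimate of the GMPP location derived from detected shading information, which is faster than sweeping the whole curve.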

62.Fibre Distribution Characterization and Its Impact on Mechanical Properties of Ultra High Performance Fibre Reinforced Concrete

Author:Lufan Li 2019
Abstract:Ultra-high performance fibre reinforced concrete (UHPFRC) is among the most innovative cement-based engineering materials, representing a major leap in the performance of this class of material. The mechanical properties of UHPFRC depend not only on the properties of the concrete matrix and fibres, but also on the interaction between these two elements. Moreover, this interaction is highly influenced by the fibre volume content distribution and fibre orientation distribution. Previous researchers developed different methods to test fibre distribution. However, apart from a general fibre efficiency reduction factor, there was no quantified relationship between a given fibre distribution and its corresponding mechanical performance. This research focuses on testing fibre distribution and investigating its influence on the mechanical properties of UHPFRC. The research adopted the C-shape ferromagnetic probe inductive test. The effective depth of the magnetic probe was determined, and the method was then applied to specimens of different thicknesses to obtain fibre volume content and fibre orientation angle. Image analysis was carried out on a number of specimens to verify the accuracy of the magnetic probe inductive test. Mechanical tests, including compressive tests, uniaxial tensile tests and bending tests, were carried out after the fibre distribution tests. The level of material performance enhancement depends on the fibre volume content and orientation angle. For tensile performance, low dosages of fibres provide little enhancement of the peak tensile/bending strength, while at higher fibre dosages linear relationships can be found between the peak uniaxial tensile strength and the fibre distribution. This relationship was further verified using the OpenSees programme. From an industrial point of view, over-dosing with fibres increases construction cost.
Furthermore, it may cause non-uniform fibre distribution and early concrete cracking. To improve the tensile behaviour of UHPFRC, adjusting the fibre orientation angle rather than simply increasing the fibre volume content can be considered.
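The linear relationship reported between fibre dosage and peak tensile strength can be illustrated with an ordinary least-squares fit; the data points below are invented for demonstration and are not measurements from the thesis.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b (pure Python)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical fibre volume content (%) vs peak tensile strength (MPa):
content = [1.0, 1.5, 2.0, 2.5, 3.0]
strength = [8.1, 9.4, 10.2, 11.6, 12.5]
slope, intercept = linear_fit(content, strength)  # slope > 0: strength rises with dosage
```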

63.Green Supply Chain Management in Manufacturing Small and Medium‐sized Enterprises: Perspective from Chang Chiang Delta

Author:XiangMeng HUANG 2013
Abstract:This research started from an interest in how small and medium-sized enterprises (SMEs) in the manufacturing industry within the geographical area of the Chang Chiang Delta in China operate with respect to sustainability by developing green supply chain management (GSCM). The aim of this study is to investigate the pressures on SME manufacturers to implement GSCM practices, and to examine the relationship between those practices and the corresponding performance at a regional level in the context of the Chang Chiang Delta. To accomplish this task, a range of literature is evaluated, focusing on GSCM theories and adoption. This review reveals a research gap regarding SMEs’ implementation of GSCM, to which this study responds. The research is underpinned by an interpretive epistemology and a multi-method design. It is an exploratory and empirical study with two rounds of primary data collection gathered from SME manufacturers in the Chang Chiang Delta region of China, which comprises the triangular-shaped territory of Shanghai, southern Jiangsu Province and northern Zhejiang Province, including the urban cores of five cities – Shanghai, Nanjing, Hangzhou, Suzhou and Ningbo. In addition, a qualitative case study is employed to provide more detailed information about GSCM implementation in SMEs. The results derived from both the questionnaire survey and the case study provide strong evidence that Chinese manufacturing SMEs have been under regulatory, customer, supplier, public and internal pressures from different stakeholders with respect to GSCM. In response to these pressures, SMEs have adopted GSCM practices, including green purchasing, eco-design, investment recovery, cooperation with customers and internal environmental management, and these practices are specific to the industrial sector considered in this study.
These practices contribute to improving performance economically, environmentally and operationally. From the literature review and the empirical findings, this research provides contributions to knowledge, as well as managerial implications. It contributes to knowledge by providing conceptual and empirical insights into how GSCM is viewed and developed among SME manufacturers, clarifying the conceptions relating to sustainability, and incorporating stakeholder theory and the theory of industrial ecology in examining GSCM development. The study also provides practical implications by offering suggestions and guidance to governments, the public, suppliers and customers across the chain, as well as the managers of SMEs, and by proposing an optimised model for the selected case for improved GSCM performance.

64.Middleware Techniques in a Rapid Response System in Wireless Sensor Networks

Author:Yuechun Wang 2021
Abstract:Wireless Sensor Networks (WSNs) are composed of embedded computers equipped with sensors, actuators and low-power radios that self-organise to form wireless networks capable of sensing the physical world. Despite their proven efficacy, uptake of modern WSNs remains limited. This is primarily due to applications' growing demand for valid information extracted from timely data flows and the complexity of processing massive, dynamic sensed data on motes. Two research problems can be identified: first, how to ensure the effectiveness of the information extracted from real-time sensed data; second, how to address complex computing tasks on WSN nodes with limited computing ability. In the distributed Internet architecture, edge computing technology performs outstandingly in providing rapid network service response, offering computing, storage and network bandwidth near the data source or users. Meanwhile, as an extension of cloud computing, fog computing, proposed by Cisco, has in WSN environments extensively exploited collaboration among massive numbers of WSN nodes, thereby improving the computing efficiency of node clusters. If the advantages of edge computing can be brought into play in the WSN scenario and combined with the characteristics of fog computing, the core problem, namely how to balance the trade-off between the response time of a WSN system and the computing complexity of data processing, can be solved. Inspired by these research problems and potential solutions, the thesis explores several technologies that take the characteristics of WSNs and the requirements of logistics applications into account, such as hierarchical edge computing and Mobile Agents (MA).
The hierarchical edge computing architecture proposed in the thesis, in conjunction with an innovative rapid response strategy, can ensure that, under the premise of 80% coverage of real-time network nodes, data anomalies are classified autonomously and responded to within 20 sampling units, thereby reducing the delay caused by waiting for cloud computing results and decision communication. In addition, applying a mobile agent cooperation mechanism, a mobile agent-based middleware framework is presented in the thesis. Based on randomly generated network topologies, the middleware is tested at a variety of network scales, with node coverage by the MA-based patrol mechanism above 98%.

65.Discriminative and Generative Learning with Style Information

Author:Haochuan Jiang 2019
Abstract:Conventional machine learning approaches usually assume that patterns are independent and identically distributed (i.i.d.). However, in many empirical cases this condition may be violated when data carry diverse and inconsistent style information, and the effectiveness of traditional predictors may be limited by the resulting violation of the i.i.d. assumption. In this thesis, we investigate how style information can be appropriately utilized to further improve the performance of machine learning models. This is achieved not only by introducing style information into some state-of-the-art models; new architectures and frameworks are also designed and implemented specifically to make proper use of style information. The main work is summarized as follows. First, the idea of style averaging is introduced through an image-processing-based sunglasses recovery algorithm for robust one-shot facial expression recognition, named the Style Elimination Transformation (SET). By recovering the pixels corrupted by the dark colors of sunglasses, the classification performance of several state-of-the-art machine learning classifiers is improved even in a one-shot training setting. Then, style normalization and style neutralization are investigated with discriminative and generative machine learning approaches respectively. In discriminative learning with style information, the style normalization transformation (SNT) is integrated into support vector machines (SVM) for both classification and regression, named the field support vector classification (F-SVC) and field support vector regression (F-SVR) respectively.
The SNT can represent nonlinearity by mapping sufficiently complicated style information into a high-dimensional reproducing kernel Hilbert space. The learned SNT normalizes the inconsistent style information, producing i.i.d. examples on which the SVM is then applied. Furthermore, a self-training-based transductive framework is introduced to accommodate styles unseen during training: the transductive SNT (T-SNT) is learned by transferring the trained styles to the unknown ones. In generative learning with style information, the style neutralization generative adversarial classifier (SN-GAC) is investigated to incorporate style information when performing classification. As a neural-network-based framework, the SN-GAC enables nonlinear mapping owing to the nonlinearity of the neural network transformation, in a generative manner. As a generalized and novel classification framework, it is capable of synthesizing style-neutralized, high-quality, human-understandable patterns from any style-inconsistent ones. Learned with an adversarial training strategy in the first step, the final classification performance is further improved by fine-tuning the classifier once the style-neutralized examples can be well generated. Finally, the reverse of the above style neutralization task in the SN-GAC model, namely the generation of arbitrary-style patterns, is also investigated in this thesis. By introducing the W-Net, a deep architecture upgraded from the well-known U-Net model for image-to-image translation tasks, few-shot (even one-shot) arbitrary-style Chinese character generation is fulfilled. Like the SN-GAC model, the W-Net is trained with the adversarial training strategy proposed in the generative adversarial network framework.
The W-Net architecture is capable of generating any Chinese character in a style similar to that of a few, or even one single, given stylized examples. For all the proposed algorithms, frameworks, and models mentioned above, for both prediction and generation tasks, inconsistent style information is taken into appropriate consideration. Inconsistent sunglasses information is eliminated by the image-processing-based sunglasses recovery algorithm in the SET, producing style-consistent patterns, and facial expression recognition is then performed on the transformed i.i.d. examples. The SNT is integrated into the SVM model, normalizing the inconsistent style information nonlinearly through the kernelized mapping, and the T-SNT further enables field prediction on styles unseen during training. In the SN-GAC model, style neutralization is performed by the neural-network-based upgraded U-Net architecture. Trained in separate steps with an adversarial optimization strategy, it produces high-quality, style-neutralized i.i.d. patterns, and the subsequent classifier achieves superior performance with no additional computation involved. The W-Net architecture enables free manipulation of the stylized generation task with only a few, or even one single, style reference(s) available, realizing few-shot, or even one-shot, arbitrary-style Chinese character generation, an appealing property hardly seen in the literature.
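The core intuition behind style normalization, removing per-style variation so that a shared predictor sees i.i.d.-like examples, can be sketched with a much simpler linear analogue than the thesis's kernelized SNT: standardise features within each style group. This toy version (group names and values are invented) only removes per-style mean and scale, whereas the SNT is learned and nonlinear.

```python
from collections import defaultdict
from statistics import mean, pstdev

def normalize_by_style(samples):
    """samples: list of (style_id, feature_value) pairs.
    Returns (style_id, normalized_value) pairs where each style group
    has been standardised to zero mean and unit variance."""
    groups = defaultdict(list)
    for style, x in samples:
        groups[style].append(x)
    stats = {s: (mean(v), pstdev(v) or 1.0) for s, v in groups.items()}
    return [(s, (x - stats[s][0]) / stats[s][1]) for s, x in samples]

# Two "styles" measuring the same pattern with different offset/scale:
data = [("a", 1.0), ("a", 2.0), ("a", 3.0),
        ("b", 101.0), ("b", 102.0), ("b", 103.0)]
normalized = normalize_by_style(data)
# After normalization, the two styles produce identical feature values,
# so a single downstream classifier can treat them as one distribution.
```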

66.Multi-Task Learning with Convolutional Neural Networks

Author:Yizhang Xia 2018
Abstract:Convolutional Neural Networks (CNNs) have achieved excellent performance on basic computer vision problems such as recognition and detection. However, the CNN is still an immature method, especially for multi-output classification. In traditional machine learning, the classic solution is Multi-Task Learning (MTL). MTL was proposed early and has remained an active topic, but joint research on MTL and CNNs is rarely reported. Fortunately, MTL has been successfully integrated with neural networks, and the CNN is a typical neural network, one designed for computer vision. Based on this situation, the main contributions of this thesis are the following three parts. First, MTL and CNNs are applied to face occlusion detection; this is the first time they have been used to detect occluded faces. The framework adopts a coarse-to-fine strategy consisting of two CNNs: the first is a region-based CNN that detects the head in a person's upper-body image, while the second is a multi-task CNN that distinguishes which facial part is occluded in a head image. The experimental results show that CNNs can be integrated well with MTL. Second, MTL and CNNs are used to jointly recognize vehicle logos and predict their attributes. To improve task performance, two MTL schemes, adaptive weighted task learning and switchable task learning, are proposed. To verify the algorithms, a large and realistic vehicle logo attributes dataset is prepared, which includes fifteen brands labeled with six visual attributes and three non-visual attributes. Extensive experiments are conducted in two scenarios, equal-priority learning and unequal-priority learning, with promising accuracies. Third, we propose a principled approach to designing an evolutionary tree-like multi-task deep learning framework which can be conveniently attached behind any well-known multi-class classification network to further improve its performance.
Our approach starts with a basic multi-class deep architecture and dynamically deepens it during training using a criterion that groups similar tasks together. Extensive evaluation on multi-class classification datasets (MNIST and Cifar10) and multi-label prediction datasets (Berkeley Attributes of People and CelebA) suggests that the models produced by the proposed method outperform the strong baseline.

67.Bioavailability-based approach to understand the effects of metals as toxicants and nutrients: Implications for environmental management

Author:Boling Li 2022
Abstract:Environmental management is the framing concept for the specific research topics in this thesis, within which the work focuses on metals in the environment. Some of the work concerns metals as toxicants, some metal micronutrients, and some metals that may be either, depending upon conditions. The thesis begins with work in which I developed a “two-in-one” whole-cell bioreporter approach to assess the harmful effects of cadmium and lead. With the lights-on bioreporter’s unique two-in-one ability for speciation and toxicity measurement, in conjunction with the validated biotic ligand model, the bioreporter can predict toxicity endpoints over the range from the lowest Water Quality Criterion to the 50th rank-percentile of aquatic organisms’ sensitivity. In the context of dramatic environmental/biogeochemical change from metal pollution, relatively little work has been done on the role of micronutrients in influencing the development and progression of harmful algal blooms. In this thesis, I report results from mesocosm experiments with Microcystis and Desmodesmus spp., in mono- and mixed cultures, to probe how copper, iron, and copper-iron amendments affect the growth, short-term assemblage progression, and production of siderophore, chalkophore, and microcystin in lake water. The findings from this study are summarized as follows: 1) copper-iron impacts on growth and community progression do not agree with lab-based findings; 2) the interplay between chalkophore and siderophore production supports a conceptual model wherein Microcystis spp. vary behavior to manage copper/iron requirements in a phased manner. By specifically screening for chalkophores, I observed a previously unreported link between chalkophore and microcystin production that may relate to iron limitation.
3) the lake water itself influences mesocosm changes; differentiated effects of iron on growth indicators and/or reduction of iron-limitation stress were found at a harmful-algal-bloom-free field station, likely a consequence of the low bioavailability of iron at that station. My findings that Microcystis spp. vary behavior to manage copper/iron through the interplay between chalkophore and siderophore production, together with the previously unreported link between chalkophore and microcystin production, address an important gap in research on the effects of micronutrient bioavailability in natural waters. Follow-up research with revised copper/iron amendments and an increased level of algal acclimation was then carried out. As in the initial work, I again saw very similar phased dynamics between chalkophore and siderophore production for Microcystis spp., with significant differences in trajectories according to specific differences in copper and iron amendments. Most interestingly, I again observed a strong microcystin-chalkophore relationship. Based on this research, I can say that chalkophore production is a predictor of this cyanobacterial toxin's production. While I discuss possible reasons for this new finding, it is previously undocumented, and I outline follow-up work that I believe would be fruitful to further elucidate the biological mechanisms underlying this behavior and why Microcystis spp. produce the toxin, microcystin.

68.Salient Object Detection and Segmentation in Video Surveillance

Author:Siyue Yu 2022
Abstract:Video surveillance supports applications in many scenarios, such as crime investigation, security systems, automatic driving, and environmental monitoring. Recently, deep-learning-based video surveillance has also become an essential topic in computer vision, with specific tasks including object tracking, video object segmentation, salient object detection, and video salient object detection. This thesis therefore studies salient object detection and segmentation in video surveillance, focusing on video object segmentation and salient object detection. In video object segmentation, we study the setting where the first frame's mask is given, and try to design a network that can adapt to variations in object appearance. The thesis proposes a framework based on the non-local attention mechanism to localize and segment the target object in the current frame, referring to both the first frame with its given mask and the previous frame with its predicted mask. Our approach achieves 86.5% IoU on DAVIS-2016 and 72.2% IoU on DAVIS-2017, at a speed of 0.11 s per frame. For salient object detection, the thesis focuses on scribble annotations. However, scribbles fail to contain enough integral appearance information. To solve this problem, a local saliency coherence loss is proposed to assist the partial cross-entropy loss and thereby help the network learn more complete object information. Further, a self-consistency mechanism is designed to make the network insensitive to different input scales. Our method achieves results comparable with fully supervised methods, setting a new state-of-the-art on six benchmarks (e.g. for the ECSSD dataset: Fβ = 0.8995, Eξ = 0.9079 and MAE = 0.0489). Lastly, co-salient object detection is also studied. Recent methods explore both intra- and inter-image consistency through an attention mechanism, but we find that existing attention mechanisms can only focus on a limited set of related pixels.
Thus, we propose a new framework with a self-contrastive loss to mine more related pixels and obtain comprehensive features. Our method obtains a maximum F-measure of 0.598 on COCA. In this way, the tasks in this thesis are well handled, and our methods can serve as new baselines for future work.
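The Fβ scores quoted above follow the F-measure commonly used in saliency benchmarks, which weights precision over recall with β² = 0.3; a small sketch of that metric is below (the thesis's exact evaluation protocol, e.g. how precision/recall are binarised per threshold, may differ).

```python
def f_beta(precision, recall, beta2=0.3):
    """Weighted F-measure used in saliency evaluation (beta^2 = 0.3
    emphasises precision over recall)."""
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Hypothetical precision/recall of a predicted saliency map:
score = f_beta(0.92, 0.85)
```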

69.Towards a theory of sharing economy-based service triad

Author:Dun Li 2020
Abstract:The sharing economy is a fast-growing phenomenon that has significantly disrupted traditional businesses. Sharing-economy businesses operate in many sectors, such as transportation, accommodation, labour, finance and food, and these companies have come to be considered important in both industry and academia. Sharing-economy platform companies across different industries constantly seek to improve their “sharing” business to provide better services to users. Fundamentally, the sharing economy involves three actors, the platform, the service supplier and the customer, forming a triadic structure within a specific sharing-economy context. Among the main streams of service operations management research, it is surprising that, with a few exceptions, the role of platform service operations management in the sharing-economy context has been ignored by researchers. Little is known about how sharing-economy platforms carry out their daily operations management in different sectors. To address this gap in the literature, four papers have been developed: one literature review (conceptual) paper and three empirical papers. Seven unicorn-level sharing-economy platform companies from three sharing-economy industries were selected for investigation: DiDi and Uber China (ridesharing), OfO, Mobike and Hellobike (bike-sharing) and Huochebang and Yunmanman (logistics-sharing).
By adopting different theories, such as balance theory, social capital theory, contingency theory, social exchange theory, information processing theory and the knowledge-based view, this study investigates different aspects of operations management in the sharing-economy context, such as the role of different platform strategies in sustainability, the influence of contingent factors on platform stickiness, bike-sharing platforms' operations management, and the information management of sharing-economy platforms. It thus makes a significant theoretical contribution to the service operations management literature and provides insightful practical implications for sharing-economy platforms.

70.Power Line Communications over Time-Varying Frequency-Selective Power Line Channels for Smart Home Applications

Author:Wenfei ZHU 2014
Abstract:Many countries in the world are developing the next generation power grid, the smart grid, to combat ongoing severe environmental problems and achieve efficient use of the electricity power grid. Smart metering is an enabling technology in the smart grid to address the energy wasting problem: it monitors and optimises the power consumption of consumers' devices and appliances. To ensure proper operation of smart metering, a reliable communication infrastructure plays a crucial role. Power line communication (PLC) is regarded as a promising candidate that will fulfil the requirements of smart grid applications; it is also the only wired technology with a deployment cost comparable to wireless communication. PLC is most commonly used in the low-voltage (LV) power network, which includes indoor power networks and outdoor LV distribution networks. In this thesis we consider using PLC in the indoor power network to support communication between the smart meter and the variety of appliances connected to the network. PLC system design in the indoor power network is challenging due to a variety of channel impairments, such as time-varying frequency-selective channels and complex impulsive noise scenarios. Among these impairments, the time-varying channel behaviour is an interesting topic that has not been thoroughly investigated. Therefore, in this thesis we focus on investigating this behaviour and developing a low-cost but reliable PLC system able to support smart metering applications in indoor environments. To aid the study and design of such a system, the characterisation and modelling of the indoor power line channel are extensively investigated in this thesis. In addition, a flexible simulation tool that generates random time-varying indoor power line channel realisations is demonstrated. Orthogonal frequency division multiplexing (OFDM) is commonly used in existing PLC standards.
However, when OFDM is adopted for time-varying power line channels, it may experience significant intercarrier interference (ICI) due to the Doppler spreading caused by channel time variation. Our investigation of the performance of an ordinary OFDM system over a time-varying power line channel reveals that if ICI is not properly compensated, the system may suffer severe performance loss. We also investigate the performance of several linear equalisers, including zero forcing (ZF), minimum mean squared error (MMSE) and banded equalisers. Among them, banded equalisers provide the best tradeoff between complexity and performance. For a better tradeoff between complexity and performance, time-domain receiver windowing is usually applied together with banded equalisers. This subject has been well investigated for wireless communication, but not for PLC. In this thesis, we investigate the performance, over time-varying power line channels, of several well-known receiver window design criteria developed for wireless communication. It is found that these criteria do not work well over time-varying power line channels. Therefore, to fill this gap, we propose an alternative window design criterion. Simulations show that our proposal outperforms the other criteria.
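Why ICI matters can be seen from the baseline case it destroys: for a time-invariant channel with a cyclic prefix, the frequency-domain channel matrix is diagonal, so a one-tap zero-forcing equaliser per subcarrier recovers the symbols exactly. A minimal numerical sketch of that baseline (a toy 4-subcarrier system, not the thesis's setup) is below; channel time variation makes this matrix non-diagonal, which is what banded equalisers approximate at reduced complexity.

```python
import cmath

N = 4  # toy number of subcarriers

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

h = [1.0, 0.5, 0.0, 0.0]   # static 2-tap channel impulse response
X = [1, -1, 1, 1]          # transmitted frequency-domain symbols

x = idft(X)                # OFDM modulation (IDFT)
# With a cyclic prefix, the channel acts as circular convolution:
y = [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]
Y = dft(y)                 # receiver DFT
H = dft(h)                 # channel frequency response (all bins nonzero here)
X_hat = [Y[k] / H[k] for k in range(N)]   # one-tap ZF equaliser per subcarrier
```

When the channel taps change within one OFDM symbol, Y[k] picks up contributions from neighbouring subcarriers (ICI), and the per-subcarrier division above is no longer sufficient.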

72.Characterization of the cross-interactions between Deformed Wing Virus (DWV), honey bee, and ectoparasitic mite, Tropilaelaps mercedesae

Author:Yunfei Wu 2020
Abstract:Honey bee colony losses have recently been reported to be associated with both the presence of the pathogen Deformed Wing Virus (DWV) and ectoparasitic mites. The DWV-vectoring role of Varroa destructor is well established, while the role of Tropilaelaps mercedesae in viral transmission has not been fully investigated. In this project, I examined the effects of infestation by both mite species on honey bees by comparing the DWV copy number and the alteration of DWV variants in individual pupae and their infesting mites. Infestation with either mite species increases the DWV copy number in honey bee pupae, which confirms the vector role of V. destructor and suggests a similar viral vectoring role for T. mercedesae. Through artificial infestation and wound induction experiments, a biological and mechanical vector role for T. mercedesae was established. I also identified a positive correlation between DWV copy number in pupae and in their infesting mites, which forms two clusters with either high or low copy number in both honey bee pupae and infesting mites. The same DWV type A variant was present in either low or high copy number in both honey bee pupae and infesting V. destructor or T. mercedesae. These data support a previously proposed hypothesis that DWV suppresses the honey bee immune system when its copy number reaches a specific threshold, promoting greater replication. Tropilaelaps mercedesae infestation induces Hymenoptaecin and Defensin-1 expression in honey bee pupae; however, these are associated with two independent events, mite feeding activity and DWV replication, respectively. DWV can be transmitted from honey bee to mite via intake of fat body or other tissues during feeding, which is supported by the observation of accumulated DWV in the mite's intestinal region.
During feeding, the induced Hymenoptaecin is also ingested by the mite and plays a negative role, down-regulating vitellogenin synthesis and thereby impairing the mite's reproductive capability. Hymenoptaecin expression induced by mite feeding thus exerts negative feedback on mite reproduction, and may help establish an equilibrium between the host (honey bee) and the parasite (mite). I also explored the critical factors for DWV infection/replication, including 1) a host with an A/T-rich genome and a skewed codon usage; 2) an intact, accessible VP1-P domain on the viral virion; and 3) certain factors critical for viral replication that are present in the honey bee but not in V. destructor, T. mercedesae or C. sonorensis.

73.Media Management and Disruptive Technology: The Nigerian Newspaper Industry Today

Author:Nelson Omenugha 2020
Abstract:New media technologies have brought about radical changes in the contemporary mass communication landscape. An important aspect of these changes, currently provoking much interest, concerns how these technologies are redefining and disrupting the operations, ethos and tastes of the old media, thus challenging the future of the traditional media institution. The Nigerian newspaper industry, like others elsewhere, is caught up in this new reality as new media technologies and the attendant alternative news sources increasingly gain footing in the country. This study therefore examines how newspaper managers in Nigeria, in order to secure their future in the new dispensation, have been responding to the urgent challenges posed by new media technologies. The research is anchored in several theories: Technological Determinism (TD), Disruptive Technology (DT), Diffusion of Innovation and the Technology Acceptance Model (TAM), and puts forward a "Techno-Human Dynamism" model as it seeks to answer the main research question: what are the observable trends in the management of Nigerian newspapers at a time when new media technologies are posing a challenge to the survival of traditional newspapers? Adopting a mixed qualitative research approach, Key Informant Interviews (KII) and Focus Group Discussions (FGD), the study focuses on four major Nigerian daily newspapers, The Sun, The Nation, The Daily Trust and The Daily Times, as well as the readers of these newspapers. Three managerial personnel from each of the selected newspapers were interviewed, while four FGD sessions comprising six discussants each were conducted among newspaper readers at four purposively selected locations, Aroma junction (Awka, Anambra), Ojota junction (Ikeja, Lagos), Sky Memorial junction (Wuse, Abuja) and Rumukoro junction (Port Harcourt, Rivers), across the country.
Employing the thematic method of data analysis, the study found that Nigerian newspapers, like their counterparts elsewhere, are already experiencing the disruptive impact of new media technologies in all major areas of their operations, including content, human resources and revenue. These disruptive impacts appear to be strengthening rather than merely weakening the newspaper organisations: in response, the newspapers have become more creative and more ethical, valorising factual, accurate, investigative and analytical reporting - issues that had hitherto posed huge ethical concerns about Nigerian journalism. Moreover, the hybridization (integration) of the new and old media, as one of the coping strategies, seems to add further strength to the newspapers as they draw on the strengths of the new media to complement the weaknesses of the old. However, the newspaper managers still have some latitude to secure the future of the industry, given its untapped potential in both the traditional and online sense. The study recommends that Nigerian newspapers endeavour to keep pace with the technological innovations driving today’s newspaper industry while boldly considering other response strategies that have worked elsewhere - including journalistic co-operatives, mergers and conglomeration - towards arresting the dwindling fortunes of the industry.

74.Novel Numerical and Computational Techniques for Remote Sensor-Based Monitoring of Water Quality

Author:Xiaohui Zhu 2020
Abstract:Monitoring water quality in real time is one of the essential measures for water environment management. Recent advances in information technology and sensor systems have catalysed progress in the remote monitoring of water quality using wireless sensor networks (WSNs). Much research has been carried out on the optimal design of water quality monitoring networks, the detection of anomalous water quality data, and new monitoring approaches, with the aims of reducing the cost of building and operating monitoring networks, expanding the monitoring area and improving monitoring efficiency. A large number of optimization algorithms have been proposed to optimize water quality monitoring networks. Most of these algorithms consider unidirectional water flow and seek globally optimized monitoring networks without considering special monitoring locations. This thesis studies optimization algorithms that design optimized water quality monitoring networks for bidirectional river systems. Reserved monitoring locations are also considered to satisfy particular monitoring requirements. Four optimization objectives are considered: minimal pollution detection time, maximal pollution detection probability, maximal centrality of monitoring locations, and reserved monitoring locations. After comparing the computing performance and Pareto frontiers of several optimization algorithms, we propose a Constrained Multi-Objective Discrete Particle Swarm Optimization (CMODPSO) algorithm with new approaches to initializing particles and computing particles' velocities and positions during the iterations. Experimental results show that CMODPSO can obtain optimized water quality monitoring networks with reserved monitoring locations while satisfying the other optimization objectives.
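The multi-objective setting above rests on Pareto dominance: a candidate monitoring network is kept only if no other candidate is at least as good on every objective and strictly better on at least one. The thesis's CMODPSO algorithm is not reproduced here, but the dominance test any such Pareto-frontier method relies on can be sketched as follows (a minimal illustration, not the thesis code; objective vectors are assumed to be minimised):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the (detection time, detection probability shortfall) pairs (1, 5), (2, 2), (3, 1) and (4, 4), the last is dominated by (2, 2) and would be discarded from the frontier.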
Affected by external interference such as harsh field environments, sensor hardware errors and communication disturbances, the water quality data collected by remote monitoring networks have a high probability of being corrupted. Detecting and filtering anomalous water quality data in real time during monitoring is a crucial challenge. We propose a novel anomaly detection algorithm based on dual time-moving windows, which can detect anomalous water quality data in real time. Compared to other anomaly detection algorithms such as anomaly detection and mitigation (ADAM) and anomaly detection (AD), it significantly improves detection performance. To move from fixed-point monitoring to surface monitoring and expand the monitoring area, we develop an unmanned surface vehicle (USV) for water quality monitoring. An Improved Angle Potential Field Method (IAPFM) is proposed for autonomous navigation and obstacle avoidance, and a heading control algorithm is developed based on proportional-integral-derivative (PID) control. Experimental results show that the USV can autonomously navigate a complex river system along a predefined route, detect and avoid obstacles during navigation, and collect water quality data in real time, which significantly improves monitoring efficiency, expands the monitoring area and saves the cost of building monitoring stations.
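Moving-window anomaly detection of the kind described above compares each new reading against robust statistics of a trailing window of recent data. The thesis's exact dual-window algorithm is not given here, so the sketch below is a deliberately simplified single-test-point variant using a median/MAD threshold; `ref_len` and `k` are assumed illustrative parameters, not the thesis's settings:

```python
from statistics import median

def detect_anomalies(series, ref_len=10, k=5.0):
    """Flag readings that deviate from the trailing reference window's
    median by more than k times the window's median absolute deviation."""
    flags = [False] * len(series)
    for i in range(ref_len, len(series)):
        ref = series[i - ref_len:i]                     # trailing reference window
        m = median(ref)
        mad = median(abs(x - m) for x in ref) or 1e-9   # guard against zero MAD
        if abs(series[i] - m) > k * mad:
            flags[i] = True
    return flags
```

Because the median and MAD are robust, a flagged spike barely shifts the reference statistics, so subsequent normal readings are not misclassified after a transient fault.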

75.Body-centred Interactive Textiles for Emotion Regulation

Author:Mengqi Jiang 2022
Abstract:Textiles have always been an important carrier for people to express their emotions. Wearable technology brings the power of computing closer to the body, which has led to a rising focus on body-centred, emotion-related interactive textile design. Body motion-engaged affective design for emotion regulation has also become a trend in games and interactive art, but the advantages of interactive textiles in this direction have not been thoroughly studied. This research aims to answer the overarching question: What are the key factors influencing body-centred interactive textiles design for emotion regulation? First, we conducted a comprehensive review of the literature on emotion-related interactive textile design and surveyed the current state of body-centred interactive textiles and the opportunities they present. From this, we discussed research directions in (1) exploring the materiality of interactive textiles, (2) integrating body motion into interaction, and (3) creating sensory stimuli with body-centred interactive textiles. Then, we investigated the research question following the Research through Design approach and verified the feasibility of regulating emotion through body-centred interactive textiles. Five sub-research questions were identified to sharpen the research direction. In the design research and practice, we first identified effective gross movement-based interactive textiles with multi-sensory feedback mechanisms for emotion regulation and created the E-motionWear prototype for participants to test in the lab and in real life. Then, we presented GesFabri, a collection of five interactive textile interfaces with distinct textures, created to investigate intuitive interaction gestures on textile textures and the emotional effects of the fine movement-based interfaces with four sensory feedback mechanisms.
Last, we developed iPillowPal, an affective movement-based interaction pillow for emotional communication and engagement between long-distance relationship (LDR) partners. We conducted a user study, a lab study, and a field study to achieve this goal. The results revealed that (1) wearing the gross movement-based interactive textiles positively impacted the users’ immediate emotion regulation, and users presented a more positive attitude towards their work; (2) both fine movement-based interaction on textiles and the feedback mechanism influenced user emotions, which also highlighted the potential of materiality in designing interactive textiles; and (3) affective movements vary with the scenario, serve a specific emotional purpose, and generate a particular emotional response. The affective movement-based interactive textiles shortened the emotional distance between LDR couples and improved their emotional state. Results across experiments show that several emotion regulation strategies can be applied to body-centred interactive textile design. Based on the findings, we summarized our research through the lenses of body movement, feedback mechanism, textile materiality, and emotion types, then reflected on the research approach and methods and the design implications, and proposed future work directions. This study offers a new perspective on the potential of emotion regulation with interactive textiles and contributes to a deeper understanding of body-centred affective design.

76.Electric Vehicle Energy Management Considering Stakeholders' Interest in Smart Grids

Author:Bing Han 2020
Abstract:With the electrification of transportation systems, Electric Vehicles (EVs) have developed rapidly in recent years. At the same time, with large-scale EV integration into power grids, the charging behaviours of EVs bring both challenges and opportunities to power grid operation. This thesis focuses on EV energy management in smart grids, and the EV energy management problem is studied considering three stakeholders' interests, i.e. the EV owner, the aggregator and the grid, respectively. First, the economic relationship between EV owners and the aggregator is studied (EV owners' and aggregator's interest). Two multi-objective optimisation methods are applied to investigate the economic relationship between these two stakeholders, and the aggregator-owner economic inconsistency issue is presented. To mediate this issue, a rebate factor is proposed in the model. The results show that a significant reduction in the EV owners' charging fee relative to self-scheduling can be achieved while the aggregator profit is maximised. Second, the EV aggregator bidding strategy in the electricity market is studied (aggregator's interest). By jointly considering the reserve capacity in the day-ahead market and the uncertainty of reserve deployment requirements in the real-time market, a scenario-based stochastic programming method is used to maximise the expected aggregator profit. The risk of deployed reserve shortage is addressed by introducing a penalty factor in the model. In addition, an owner-aggregator contract is designed to mitigate the economic inconsistency issue between EV owners and the aggregator. The results show that the expected aggregator profit is guaranteed by maximising reserve deployment payments and mitigating the penalties, and thus the uncertainty of the reserve market is well managed. Third, EV integration in a transmission system is studied (grid's interest) to achieve coordination between generators and EVs.
To tackle the challenge of the large-scale EV integration problem, a bi-level scheduling strategy is proposed. The bi-level strategy clearly defines the responsibilities of the transmission system operator and the aggregator. An EV information grouping method is designed, which efficiently tackles the optimisation complexity problem. In addition, a detailed EV battery charging model is built. The results show that the total cost of the system is minimised and EVs can shave peak loads and fill valley loads. This thesis discusses the EV energy management problem considering three stakeholders' interests, respectively. The proposed strategies clearly evaluate and define the economic relationships and responsibilities among EV owners, the aggregator and the grid in managing EV charging and discharging behaviours. Based on the three case studies conducted in this thesis, EV energy management can benefit the stakeholders as follows: (1) the EV owner charging fee is minimised while driving requirements are satisfied; (2) the aggregator profit is maximised by participation in the electricity market; (3) the cost of the system is minimised by achieving coordination between EVs and generators.
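The "shave the peak and fill the valley" behaviour described above can be illustrated with a toy greedy heuristic that assigns each increment of EV charging demand to the hour with the lowest current total load. This is only an intuition-building sketch, not the thesis's bi-level optimisation; `max_rate`, `step` and the hourly load vector are assumed illustrative values:

```python
def schedule_charging(base_load, ev_energy, max_rate, step=0.1):
    """Greedy valley filling: repeatedly assign a small charging increment
    to the hour with the lowest current total load, respecting a per-hour
    charging cap.  Assumes total headroom exceeds the energy demand."""
    load = list(base_load)
    ev = [0.0] * len(load)
    remaining = ev_energy
    while remaining > 1e-9:
        hours = [h for h in range(len(load)) if ev[h] + step <= max_rate]
        h = min(hours, key=lambda x: load[x])   # deepest valley with headroom
        inc = min(step, remaining)
        load[h] += inc
        ev[h] += inc
        remaining -= inc
    return ev, load
```

With a base load of [5, 3, 1, 3, 5] and 2 units of EV energy, all charging lands in the off-peak third hour and the system peak is unchanged.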

77.Investigating the Effects of Simian Retrovirus (SRV) Infection on the Autophagic Pathway, Apoptotic Pathway and m6A RNA Methylation in Jurkat Cells

Author:Jingting Zhu 2019
Abstract:Simian type D retrovirus (SRV) is an etiological agent of the fatal simian acquired immunodeficiency syndrome (SAIDS), which mainly infects Asian macaques and leads to varying degrees of immunosuppression. Until now, little was known about the underlying pathogenic mechanisms of SRV infection. In particular, the effects of SRV infection on T lymphocytes, the major host cells of SRV, are still largely unclear. Apoptosis and autophagy are two important, evolutionarily conserved host immune defense pathways against viral invasion that mediate viral pathogenesis. In addition, in the last decade, a growing number of studies have revealed the emerging roles of m6A RNA modification in regulating viral infection and virus-host cell interactions. Therefore, the aims of this thesis are to investigate the effects of SRV infection on the autophagic pathway, the apoptotic pathway and m6A RNA modification in Jurkat T lymphocytes (Jurkat cells). The capacities of SRV to infect and replicate in Jurkat cells were also assessed. The results showed that both SRV-4 and SRV-8, the major SRV subtypes circulating in the macaque breeding colonies in China, were able to infect and replicate in Jurkat cells. In addition, both SRV-4 and SRV-8 infection were shown to induce autophagy and apoptosis in Jurkat cells. The results demonstrated that SRV-4/SRV-8 infection was able to enhance the formation of autophagosomes as well as to increase the completed autophagic flux in Jurkat cells. Moreover, the levels of activated caspase-3 and caspase-8 and apoptosis were significantly increased in Jurkat cells by SRV-4/SRV-8 infection. In addition, the SRV-8 infection-induced autophagy was shown to inhibit SRV replication and promote apoptosis in Jurkat cells.
Inhibition of autophagy by knockdown of Beclin1 in SRV-8-infected Jurkat cells was shown to significantly increase the amount of SRV genome released into the culture medium, as well as to significantly decrease the levels of caspase-3/-8 activation and inhibit apoptosis. Interestingly, further investigation of the interaction between LC3 and procaspase-8 in SRV-8-infected Jurkat cells suggested that autophagosomes were, at least partially, involved in the process of caspase-8 activation. In addition, the results in this thesis also showed that SRV-8 viral RNAs in the infected Jurkat cells contain six distinct m6A peaks. Moreover, SRV-8 infection was shown to decrease the global m6A level in Jurkat cells, as well as to reprogramme the Jurkat cellular m6A epitranscriptome. Interestingly, depletion of ALKBH5, an m6A “eraser”, or YTHDF1, an m6A “reader”, in the infected Jurkat cells was demonstrated to significantly decrease SRV-8 replication, suggesting regulatory roles of m6A modification and the components of the cellular m6A machinery in SRV replication. The results in this thesis reveal for the first time the effects of SRV infection on the autophagic and apoptotic pathways as well as on m6A RNA methylation in Jurkat cells, which have the potential to provide novel insights for the development of new antiviral therapies.

78.Automated Certification of Online Auction Services

Author:Wei BAI 2016
Abstract:Auction mechanisms are viewed as an efficient approach to resource allocation, and different types of auctions have been designed to allocate spectrum, determine positions for advertisements on web pages, and sell products on the Internet, among others. Online auctions can be implemented as an intermediary for both sellers and buyers in agent-mediated e-commerce systems. This raises two concerns. Firstly, the automation of online auction trading requires buyer agents to understand the auction protocol and have the ability to communicate with the seller agents (i.e., the auctioneer). Secondly, buyer agents need to automatically check desirable properties that are central to their decision making. To address both concerns, we have proposed a certification framework that enables software agents to automatically verify desirable properties of a specific auction through a formally designed communication protocol, and then make decisions according to the result of the communication. Furthermore, we have extended the communication mechanism to the area of Semantic Web Service composition and have explored the verification of combinatorial auction mechanisms. To demonstrate our approach, we have modelled online auctions as web services and have applied Semantic Web Service techniques to represent auction protocols. Then we rely on computer-aided verification techniques to construct and check formal proofs of desirable properties for specific auctions. Finally, dialogue games are proposed to enable decision making and service composition for software agents.

79.Market risk management in WTI crude oil market

Author:Zihao Gong
Abstract:This research focuses on risk management in the WTI crude oil market. It starts by identifying the underlying shocks that drive crude oil dynamics and how each shock explains price changes. After identifying the sources of the price dynamics, the third chapter focuses on constructing appropriate value-at-risk (VaR) models to measure the market risk of oil. The last part of the research uses the findings of chapters two and three to construct highly efficient hedging strategies for risk management purposes; the topics thus build progressively on one another. The second chapter explores the drivers of oil price dynamics in the futures market from an economic perspective using a restricted vector autoregressive (VAR) model. The VAR approach is applied to decompose futures prices into three components: supply shocks, demand shocks, and precautionary demand shocks. The impact of the exogenous shocks on oil price dynamics is found to be time-varying and to differ across economic events. In general, we find that the real demand shock plays a dominant role in determining oil futures prices, followed by precautionary demand shocks and supply shocks. After establishing the sources of market risk in the oil market, the third chapter addresses how to measure crude oil market risks accurately. This chapter compares the accuracy of candidate VaR models for risk measurement in the oil futures market. A more flexible parametric distribution is proposed in combination with GARCH models. We show that the newly proposed FIGARCH-SGT model improves the accuracy of VaR estimates compared with competitors in the literature. The semi-parametric conditional POT model is less affected by the specification of volatility models and produces substantial forecasting accuracy. The fourth chapter constructs an optimal hedging strategy to manage market risks with the oil-specific features discovered in the previous chapters.
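For context on the VaR models compared above, the simplest benchmark any parametric model is judged against is historical simulation: VaR at level α is read off as the α-quantile of the empirical loss distribution. The sketch below is that baseline only, not the thesis's FIGARCH-SGT or conditional POT models, and the index convention used is one of several in common use:

```python
def historical_var(returns, alpha=0.05):
    """One-period VaR at confidence 1 - alpha via historical simulation:
    the loss exceeded in roughly an alpha fraction of past observations."""
    srt = sorted(returns)                    # worst returns first
    idx = max(0, int(alpha * len(srt)) - 1)  # alpha-quantile position
    return -srt[idx]                         # report VaR as a positive loss
```

A stricter confidence level (smaller α) moves the quantile deeper into the loss tail, so the reported VaR rises.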
We estimate the optimal hedge ratio based on newly proposed static and dynamic hedging models. The static hedge is found to perform better in risk reduction, especially in economic turmoil, while the dynamic hedge performs better in acquiring risk-adjusted returns, especially when the market is stable. Moreover, the term structure of the market (contango versus backwardation) is found to have a significant impact on the hedging models' performance.
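The static hedge ratio referred to above has the textbook minimum-variance closed form h* = Cov(r_s, r_f) / Var(r_f), where r_s and r_f are spot and futures returns. A minimal sketch of that estimator (the thesis's own static and dynamic specifications are not reproduced here):

```python
def min_variance_hedge_ratio(spot_returns, futures_returns):
    """Static minimum-variance hedge ratio h* = Cov(rs, rf) / Var(rf),
    estimated from paired return samples."""
    n = len(spot_returns)
    ms = sum(spot_returns) / n
    mf = sum(futures_returns) / n
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_returns, futures_returns)) / (n - 1)
    var = sum((f - mf) ** 2 for f in futures_returns) / (n - 1)
    return cov / var

def hedged_returns(spot_returns, futures_returns, h):
    """Return of a portfolio long one unit of spot, short h units of futures."""
    return [s - h * f for s, f in zip(spot_returns, futures_returns)]
```

When spot returns are an exact multiple of futures returns, h* recovers that multiple and the hedged portfolio's variance collapses to zero; real data leaves residual basis risk.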

80.Matching and Segmentation for Multimedia Data

Author:Hui Li 2022
Abstract:With the development of society, both industry and academia are paying increasing attention to multimedia systems, which handle image/video, audio, and text data comprehensively and simultaneously. In this thesis, we focus mainly on multi-modality data understanding, combining the two subjects of Computer Vision (CV) and Natural Language Processing (NLP). Such tasks are widely used in many real-world scenarios, including criminal searches based on witnesses' language descriptions, robotic navigation with language instructions in the smart industry, terrorist tracking, missing person identification, and so on. However, such multi-modality systems still face many challenges that limit their performance and ability in real-life situations, including the domain gap between the vision and language modalities and the demand for high-quality datasets. Therefore, to better analyze and handle these challenges, this thesis focuses on two fundamental tasks: matching and segmentation. Image-Text Matching (ITM) aims to retrieve the texts (images) that describe the most relevant content for a given image (text) query. Due to the semantic gap between the linguistic and visual domains, aligning and comparing feature representations for languages and images remain challenging. To overcome this limitation, we propose a new framework for the image-text matching task, which uses an auxiliary captioning step to enhance the image feature, fusing the image feature with the text feature of the captioning output. As a downstream application of ITM, language-person search is a specific case in which language descriptions are provided to retrieve person images; it also suffers from the domain gap between linguistic and visual data. To handle this problem, we propose a transformer-based language-person search matching framework with matching conducted between words and image regions for better image-text interaction.
However, collecting a large amount of training data is neither cheap nor reliable when it depends on human annotations. We further study the one-shot person Re-ID (re-identification) task, which aims to match people given one labeled reference image per person, whereas previous methods require a large number of ground-truth labels. We propose progressive sample mining and representation learning to better fit the limited labels in the one-shot Re-ID task. Referring Expression Segmentation (RES) aims to localize and segment the target according to a given language expression. Existing methods jointly consider the localization and segmentation steps, relying on the fused visual and linguistic features for both. We argue that the conflict between the purpose of finding the object and generating the mask limits RES performance. To solve this problem, we propose a parallel position-kernel-segmentation pipeline that better isolates, and then couples, the localization and segmentation steps. In our pipeline, linguistic information does not directly contaminate the visual feature used for segmentation. Specifically, the localization step localizes the target object in the image based on the referring expression, and the visual kernel obtained from the localization step then guides the segmentation step. This pipeline also enables us to train RES in a weakly-supervised way, where the pixel-level segmentation labels are replaced by click annotations on center and corner points. The position head is trained with full supervision using the click annotations, and the segmentation head is trained with weakly-supervised segmentation losses. This thesis focuses on the key limitations of multimedia systems, and the experiments prove that the proposed frameworks are effective for the specific tasks. The experiments are easy to reproduce with clear details, and source code is provided for future work on these tasks.
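At retrieval time, image-text matching of the kind described above reduces to ranking one modality's embeddings by similarity to the other's, with cosine similarity as the usual score. The bare-bones sketch below shows only that final ranking step; the captioning-enhanced and transformer models of the thesis are not reproduced, and the embeddings are assumed to be precomputed toy vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def retrieve(text_embedding, image_embeddings):
    """Rank images by cosine similarity to the text embedding;
    return indices sorted from best to worst match."""
    sims = [cosine(text_embedding, img) for img in image_embeddings]
    return sorted(range(len(sims)), key=lambda i: -sims[i])
```

The whole difficulty of ITM lies upstream of this step: learning encoders that place matching images and texts close together in the shared space despite the semantic gap.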

81.Morphodynamics of Fence-dune Systems

Author:Qingqian Ning 2021
Abstract:Aeolian processes remove nutrients from arable land, pose potential threats to human respiratory systems, and deteriorate infrastructure and natural habitats. Fences have been widely used to control aeolian processes in two ways: to reduce wind speed and thereby mitigate erosion, or to initiate dunes that protect an area of interest. The wind reduction effect of fences has been intensively studied, while their sand trapping effect needs further investigation. A new laser sheet sensor was developed to measure the instantaneous aeolian flux. Laboratory calibration showed that the sensor has a short response time, good consistency, and a high saturation limit; however, it should be calibrated on site, as its field performance was not as expected. The efficiency of a fence in wind speed reduction and sand trapping is determined by many parameters, including fence height, length, width, opening size, porosity, and opening distribution. The impact of fence height on sand trapping capacity is important but had not been investigated. Fences of three different heights were deployed in the field and the dune development parameters were recorded. The results showed two stages of development: in stage I the dune grew vertically, and in stage II it expanded horizontally. The maximum dune height was also proportional to the fence height. Porosity is a critical parameter in fence design, yet different fence configurations with the same porosity had not been compared. A field experiment was conducted on a beach with eight fence configurations, and a terrestrial laser scanner was used to measure dune morphology during dune development. Results showed that the fence configuration influences not only where the embryo dune emerges, but also how the final dune appears.
The results of the 3-D dune morphology showed that a length-height ratio of 10 or over is adequate to keep the lateral edge effect negligible at the central profile. It was also found that the smaller the opening size, the higher the merging point of the windward and leeward dunes. To investigate the interactions in the wind-sand-fence-dune system, the 2-dimensional wind field needed to be examined; however, since the cost of deploying multiple towers of anemometers is too high, wind tunnel measurements were taken on fence/dune models at a model-prototype ratio of 1:10. The results showed that the wind reduction effect of all the fences can be neglected above twice the fence height, while the wind reduction effect between the surface and twice the fence height varies significantly. The upper denser fence was the only fence with a considerable region of severely reduced wind speed. The wind field on the windward side was similar across fences, while that on the leeward side varied significantly. Moreover, the wind-sand-fence-dune system remained in a negative feedback loop until it reached a dynamic equilibrium. The wind near the dune profiles of the upper denser fence and lower denser fence was above 3 m/s, suggesting that equilibrium had been reached for those two fences. For the dunes of the other fences, there existed regions with wind velocity at 2 m/s or lower, in which particles are prone to deposit and the dune surface is prone to develop further.

82.The mechanisms to regulate arsenic behaviors in redox transition zones in paddy soils

Author:Zhaofeng Yuan 2020
Abstract:Rice (Oryza sativa L.) is a staple food, especially in Asia, but rice production is threatened by arsenic (As) contamination of paddy soil. Contamination of paddy soil with As is mainly caused by anthropogenic activities, such as mining and irrigation with high-As groundwater. External As first enters the overlying water and then accumulates in the paddy soil. The soil-water interface (SWI) is the gate controlling As exchange between soil and overlying water, and the rhizosphere is the inlet for As from soil into the rice root. Under natural conditions, a redox transition occurs along both micro-interfaces due to atmospheric O2 diffusion or radial O2 loss from the root. Arsenic is sensitive to redox conditions and tends to change over space and time across these micro-interfaces. However, a deep understanding of As cycling in the paddy water-soil-rice system has been hindered to date by the lack of techniques to sample micro-interfaces repeatedly at high resolution. To fill this gap, a novel high-resolution porewater sampler was developed in this study, and with it the spatiotemporal control of As was studied at the paddy SWI and in the rhizosphere. A hollow fiber membrane tube (~ 2 mm diameter) was evaluated for sampling dissolved elements via a passive diffusion mechanism. The results showed that quantification of solutes surrounding the tube can be achieved at intervals of ≥ 24 h regardless of pH, ionic strength, and dissolved organic matter conditions. This technique, called the In-situ Porewater Iterative (IPI) sampler, was further validated in soils under an anoxic-oxic transition created by bubbling N2 and air into the overlying water. The results showed that the IPI sampler is a powerful and robust technique for monitoring the dynamics of element profiles in soil porewater at high (mm) resolution.
Moreover, ICP-MS and IC-ICP-MS measurement methods were optimized to increase the throughput of multi-element measurement in the limited sample volumes (μL level) collected by high-resolution porewater samplers (e.g. IPI samplers). Major elements (e.g. iron (Fe) and manganese (Mn), mg·L-1 level) were measured by ICP-MS in extended dynamic range mode to avoid signal overflow, while trace elements (e.g. As, μg·L-1 level) were measured in dynamic reaction cell (O2) mode to alleviate potential polyatomic interferences. An ammonium bicarbonate mobile phase was further demonstrated to simultaneously measure common species of As, phosphorus (P) and sulfur (S) in IC-ICP-MS analysis. With the optimized analytical methods and IPI samplers, the measurement throughput of multiple elements and their species was improved up to 10-fold compared to traditional methods. Furthermore, the cycling of As across the SWI and rhizosphere was studied with the updated IPI sampler and state-of-the-art analytical techniques. At the SWI, profiles of As, Fe and other associated elements in five paddy soils were mapped. The results showed a close coupling of Fe, Mn, As and P in 4 out of 5 paddy soils. However, decoupling of Fe, Mn and As was observed in the oxic-anoxic transition zone of one paddy soil. The study provided in situ evidence that decoupling of As from Fe and Mn may happen in the oxic-anoxic transition zone of the SWI. For the rhizosphere, dynamic profiles of Fe and As were mapped by IPI samplers from 0 to 40 days after transplanting. The results showed that Fe and As change spatiotemporally in the rhizosphere. Interestingly, Fe oxides formed in the rhizospheric soil, rather than on the rice root (Fe plaque), played the key role in immobilizing mobile As from the bulk soil. A model of As transport from soil to rice, linking the temporal and spatial regulation of As in paddy soils, was provided to help better understand As cycling in paddy soils.

83.Extensible and Explainable Lifelong Machine Learning Architecture for Double-Track Fine-Grained Sentiment Analysis

Author:Xianbin Hong 2022
Abstract:Lifelong machine learning aims to accumulate knowledge over a lifetime and use that knowledge to solve various tasks. It retains knowledge while solving problems in order to improve its ability to handle different tasks: past knowledge contributes to new task solving, and old tasks also benefit from new knowledge. Lifelong machine learning is a grand vision for artificial intelligence rather than a specific algorithm; any method can be a part of lifelong learning if it contributes knowledge to solving other tasks or leverages outside knowledge to conduct a new task. Many famous learning paradigms, like “transfer learning”, “learning to learn”, etc., are components of lifelong learning. Although researchers have worked on lifelong learning for decades, many gaps remain. To advance lifelong learning, this thesis discusses scalability and knowledge validation issues. To give readers a direct understanding of lifelong learning, the author chooses fine-grained sentiment analysis as a running example. Chapter 1 introduces the motivation and research questions in detail. Chapter 2 then reviews the history of lifelong learning and its definition to give readers a better understanding, and also raises some issues of lifelong learning. Chapter 3 introduces a deep neural network-based lifelong learning approach for Amazon product review sentiment classification. Compared with the non-lifelong deep learning method, the lifelong learning approach improves the F1 score of sentiment classification on the negative class from 67.78% to 78.84%. Its time complexity is reduced to O(n), a significant improvement over the O(n²) complexity of previous research. This chapter proposes leveraging knowledge distillation to reduce the model size for real-time tasks. The lifelong learning architecture shows good scalability and can handle 10,000 tasks in real time at an affordable cost.
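The knowledge distillation mentioned above compresses a large teacher model into a smaller student by training the student to match the teacher's temperature-softened output distribution. The sketch below is the standard Hinton-style soft-target loss, not the thesis's exact training setup; the temperature `T` and the logits are assumed illustrative values:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's softened distribution, scaled by T^2 (the soft-target term
    of knowledge distillation)."""
    p = softmax(teacher_logits, T)  # teacher targets
    q = softmax(student_logits, T)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T
```

The loss is minimised exactly when the student reproduces the teacher's logits (up to a constant shift), which is what lets a compact student inherit the teacher's task knowledge.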
This architecture also incorporates knowledge validation to achieve better performance. This thesis categorizes knowledge in lifelong learning into two forms: implicit knowledge and explicit knowledge. Implicit knowledge is knowledge that humans cannot directly understand, such as the parameters of the deep neural networks in chapter 3. In contrast, people prefer explicit knowledge because it is more explainable. Chapter 4 therefore uses fine-grained sentiment analysis as an example to discuss how to obtain and maintain explicit knowledge for lifelong learning. Fine-grained sentiment analysis needs knowledge of product features and people's attitudes toward these features, and it is much more complex than the sentiment classification task in chapter 3. Lacking such knowledge, the deep learning approach's classification accuracy is only 47% on the Twitter product review test dataset. To solve this problem, the author uses entity recognition to detect product features and reinforcement learning to learn people's attitudes toward each feature. The reinforcement learning approach can also monitor changes in knowledge and evaluate its reliability. This explicit knowledge-based approach can explain why the model makes a prediction and whether the decision is reliable. It can provide consumers with a statistical report showing how people feel about each feature; different customers have different demands, so they need to know whether each feature of a product satisfies their demands before purchase. Knowledge validation can also tell researchers whether the knowledge is reliable enough to use. With the help of explicit knowledge, the classification accuracy on the Twitter product review test dataset rises from 47% to 72%, a significant improvement. As explicit knowledge has better explainability, the author aims to use explicit knowledge as much as possible. 
However, collecting explicit knowledge takes time, so implicit knowledge remains necessary and valuable in practice. On the path of lifelong learning, we need both implicit and explicit knowledge at the same time; hence, it is a double-track approach.
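The knowledge distillation mentioned for chapter 3 (used to shrink the model for real-time tasks) can be sketched generically as follows. This is a minimal illustration, not the thesis's actual model: the temperature `T`, blending weight `alpha`, and the logits are hypothetical values; a student model is trained against a mix of the hard labels and the teacher's temperature-softened predictions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with KL divergence to the teacher's
    soft targets (Hinton-style distillation, scaled by T^2)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * hard.mean() + (1 - alpha) * (T ** 2) * kl.mean()
```

Minimizing this loss lets a small student absorb the teacher's "dark knowledge" (relative probabilities of wrong classes), which is the usual route to a smaller real-time model.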

84.The Work Experience and Practice of the Crowdsourcing Workforce in China

Author:Yihong Wang 2022
Abstract:Crowdsourcing has become an international phenomenon attracting businesses and a crowd workforce across the globe. China, one of the world's most populous countries, has a rapidly growing digital economy that now supplies a substantial workforce to crowdsourcing platforms. However, not only is there limited research on the work experiences and practices of Chinese crowdworkers, but existing studies generally overlook issues pertaining to an emerging type of crowd workforce known as the "crowdfarm": organizations that undertake crowdwork as part of their formal businesses. The lack of understanding of this digital workforce has been identified as an obstacle to the development and application of crowdsourcing as a disruptive value-creation model utilizing the resources of human intelligence. Considerable potential therefore exists in the Chinese crowdsourcing context for HCI and CSCW studies to help alleviate this issue. This thesis explores the job demands, resources, crowdwork experiences and platform commitment of general Chinese crowdworkers, compares the work experiences of crowdfarm workers and solo crowdworkers, and examines the work practices of crowdfarms as well as their interplay with solo crowdworkers, requestors, and crowdsourcing platforms. To explore the aforementioned, first, based on a framework of well-established approaches, namely the Job Demands-Resources model, the Work Design Questionnaire, the Oldenburg Burnout Inventory, the Utrecht Work Engagement Scale, and the Organizational Commitment Questionnaire, we systematically study the work experiences of 289 crowdworkers who work for ZBJ, the most popular Chinese crowdsourcing platform. Our study examines these crowdworker experiences along four dimensions: (1) crowdsourcing job demands, (2) job resources available to the workers, (3) crowdwork experiences, and (4) platform commitment. 
Our results indicate significant differences across the four dimensions based on crowdworkers' gender, education, income, job nature, and health condition. Further, they illustrate that different crowdworkers have different needs and thresholds for demands and resources, and that this plays a significant role in moderating the crowdwork experience and platform commitment. Overall, this part of the work sheds light on the work experiences of general Chinese crowdworkers and, at the same time, contributes to furthering understanding of the work experiences of crowdworkers. Next, drawing on a study involving 48 participants, our research explores, compares and contrasts the work experiences of solo crowdworkers with those of crowdfarm workers. Our findings illustrate that the work experiences and contexts of solo workers and crowdfarm workers are substantially different with regard to all seven investigated aspects, namely (1) work environment, (2) tasks, (3) motivation and attitudes, (4) rewards, (5) reputation, (6) crowdwork satisfaction, and (7) work/life balance. This part of the work contributes to furthering the understanding of the work experiences of two different types of crowdworkers in China. Finally, we extended our study of typical solo crowdworker practices to include crowdfarms. We report on interviews with people who work in 53 crowdfarms on the ZBJ platform. We describe how crowdfarms procure jobs, carry out macrotasks and microtasks, manage their reputation, and employ different management practices to motivate crowdworkers and customers. The results also reveal the crowdfarms' interplay with solo crowdworkers, requestors and crowdsourcing platforms. Overall, this work provides one of the first systematic investigations of the work experience and practice of digital labourers in the Chinese crowdsourcing context, addressing the relevant gaps in the current literature. 
At the same time, by identifying and studying an emerging crowdsourcing workforce - crowdfarm - in the changing landscape of crowdsourcing in China, our work also provides a new direction and topic for researchers in the field of HCI/CSCW. We hope our work stimulates others to join in research and discussion of the potential impact of such evolution on the gig economy and the well-being of the tens of millions of people now engaged in crowdsourced work in a broader context.

85.High Efficiency Antenna Designs for Wearable Applications

Author:Rui Pei 2021
Abstract:Wearable antennas have attracted increasing attention due to the growing popularity of wearable electronics over the last decade. These antennas, situated on the human body, in clothing, or on daily accessories, help form the wireless channel required in a Wireless Body Area Network (WBAN). Wearable antenna designs face a series of challenges due to their working environment: frequency shifting, efficiency degradation and radiation distortion will be induced by human body tissue. More importantly, a level of shielding of radiation into the body should be provided to meet the Specific Absorption Rate (SAR) requirement. The aim of this study is to design antennas suitable for on-body applications over long periods of time. The proposed antennas should have the following properties: (1) being conformal and ergonomic to avoid any discomfort; (2) minimizing radiation into the human body for safety while ensuring high on-body radiation efficiency; (3) using reasonable materials and manufacturing processes to limit the overall cost while maintaining a certain level of robustness. To achieve these properties, the belt buckle was chosen as the platform for the proposed antenna designs. The belt buckle, with its rigid metallic nature, enables the designs to be efficient and robust. In this thesis, two types of novel belt antennas are presented: the first is based on a pin-buckle design, and the second on a single-tongue buckle. An in-house reverberation chamber was designed and installed to accurately measure the on-body radiation efficiency of the proposed antennas. Textile electromagnetic bandgap materials are studied and applied to the second belt antenna, raising its on-body radiation efficiency from around 40% to over 70%.

86.The Economic Effects of Infrastructure on the Prefecture Level in China, Evidence from Historic and Modern Data

Author:Zhe Yuan 2022
Abstract:This dissertation comprises three essays that examine the economic effects of three different infrastructure types. In this dissertation, the author aims to identify the initial incentives for decision makers to build infrastructure. The first essay uses a fixed effects model to examine the effects of the Grand Canal and major waterways on wheat market integration in the mid-Qing period. Applying the methodology of Donaldson (2018), it demonstrates that wheat prices in cities along all waterways, including the Grand Canal, responded weakly to local weather conditions and strongly to price fluctuations in neighbouring cities. The second essay implements a quantitative method to investigate the transport efficiency and economic efficiency of urban rail transportation (URT) systems across Chinese cities. Data Envelopment Analysis (DEA) is employed to generate production frontiers for economic and transport outcomes, one producing transportation turnover and the other serving economic objectives. After deriving economic and transport efficiency, the essay uses Tobit regression to estimate the factors affecting efficiency. The analysis clearly demonstrates that URT infrastructure is more efficient at transporting passengers in first-tier Chinese cities, but does a better job of improving GDP and economic attractiveness in other cities. The evidence thus suggests, ex post, that the primary goal of building a URT system may not be the same for policy makers in cities of different sizes. The third chapter estimates the effects of opening new airports on employment in 19 different sectors using prefecture-level data from 2003 to 2018. Using a difference-in-differences (DID) specification, it finds that airport openings mainly brought significant growth in two sectors, wholesale & retail and transport & warehousing, across the whole prefecture region. No significant effects were found in the other sectors or in total employment. 
These findings can be attributed to each sector's heterogeneous dependence on air traffic. To deal with endogeneity, an instrumental variable based on the distance to the nearest hub airport and the location of military airports is constructed. The two-stage least squares (2SLS) regression with the instrumental variable (IV) suggests that the baseline model underestimates the significant effects on the wholesale & retail and transport & warehousing sectors.
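The 2SLS logic described above can be sketched on synthetic data. This is a generic illustration of the estimator, not the thesis's airport data: the data-generating process, the true coefficient of 2, and the instrument strength are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                 # instrument, e.g. distance to the nearest hub airport
u = rng.normal(size=n)                 # unobserved confounder driving endogeneity
x = 0.8 * z + u + rng.normal(size=n)   # endogenous regressor (e.g. airport access)
y = 2.0 * x - 1.5 * u + rng.normal(size=n)  # outcome (e.g. sector employment); true effect = 2

def ols(X, y):
    """Ordinary least squares coefficients via least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# First stage: project the endogenous regressor onto the instrument.
X1 = np.column_stack([np.ones(n), z])
x_hat = X1 @ ols(X1, x)

# Second stage: regress the outcome on the fitted values.
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)[1]

# Naive OLS for comparison: biased because x is correlated with u.
beta_ols = ols(np.column_stack([np.ones(n), x]), y)[1]
```

With the confounder pushing OLS away from 2, the 2SLS estimate recovers the true effect because the instrument is correlated with the regressor but not with the error.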

87.Mapping the catalytic active site of enoyl reductase from Mycobacterium tuberculosis polyketide synthase 5

Author:Yanni Xue 2022
Abstract:Tuberculosis infection is one of the leading causes of mortality worldwide and is caused by the bacterium Mycobacterium tuberculosis (Mtb). With the surge of multidrug-resistant Mtb strains, tuberculosis remains a huge global threat. Therefore, the identification of potential drug targets and the development of new treatments need immediate attention. One of the strategies underpinning Mtb resilience is its capability to build a thick and waxy cell wall that protects it from antimicrobial attack. The abundant lipids in the mycobacterial cell envelope are attractive for drug development owing to their vital role in maintaining the structural integrity and pathogenicity of the bacterium. Polyketide synthases (PKSs) participate in the assembly of various polyketide products, while the enoyl reductase (ER) domain is among the least structurally and biochemically characterized domains in PKSs. In this PhD work, we determined the first X-ray structure of Mtb polyketide synthase 5 enoyl reductase (PKS5-ER) in its apo form at 2.7 Angstrom resolution. The structure displays a homodimeric arrangement around a central two-fold axis built up with a beta-sheet shared by the two molecules of the dimer. The binding site of the cofactor nicotinamide adenine dinucleotide phosphate (NADPH) is unoccupied and displays signs of structural flexibility: the side chain of F42 is flipped 90°, sterically hindering the entrance of the cofactor. Moreover, we also established the first in vitro biochemical studies of an ER domain of the Mtb PKS family, showing that PKS5-ER is capable of reducing the enoyl double bond of butenyl-CoA and crotonyl-CoA. 
Although the results could not confirm the expected ordered sequential mechanism, the reaction model of PKS5-ER is proposed as follows: the 2-enoyl intermediate generated by the preceding enzymatic domain of PKS5-ER (PKS5-KR) waits for free NADPH to approach and be positioned for the dehydrogenation of NADPH, before the 2-enoyl intermediate is reduced. Additionally, biophysical screening results suggested that G152 and G154 are pivotal residues for PKS5-ER in the dynamics between different protein conformations. Kinetic characterization of the protein variants F42A, T127A, H147A, S148A, G151A, G152A, G154A, R177A, R193A, K240A, L263A, D264A and H317A showed that they are catalytically impaired, with progression curves that could hardly be fitted to a Michaelis-Menten curve. It can be concluded that these mutated residues are involved in the catalytic mechanism of PKS5-ER to some extent, though it is more likely that they play a structural rather than a chemical role. Among them, the highly conserved GGVGMA NADPH cofactor binding motif, especially residues G152 and G154, plays a vital role in the catalytic activity of PKS5-ER, as their mutation caused the greatest suppression of activity in terms of both catalytic efficiency and binding strength. Small molecule screening identified 2-phenylhydroquinone (PHQ) and hydroquinone (HQ), which were characterized as displaying inhibition of PKS5-ER substrate catalysis with IC50 values of 97.6 μM and 611.4 mM, respectively. It can be inferred that quinone derivatives may display potent inhibition of PKS5-ER substrate catalysis as competitive inhibitors with respect to butenyl-CoA. Docking of PHQ and HQ to PKS5-ER gave estimated free energies of -5.86 kcal mol-1 and -4.25 kcal mol-1, respectively, suggesting strong binding between PKS5-ER and these two molecules and confirming that the larger molecule showed stronger inhibition. 
The docking structures also supported the hypothesis that the binding pocket of quinone derivatives in PKS5-ER is close to NADPH, in some cases (PHQ) to the nicotinamide moiety of NADPH. This PhD work helped to gain insights into the catalytic active site of Mtb PKS5-ER and laid the foundation for the future discovery of small molecules with inhibitory capacity that could possibly be translated into therapeutic agents.
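The Michaelis-Menten model used in the kinetic characterization above relates the initial rate to substrate concentration as v = Vmax·[S]/(Km + [S]). A minimal numpy sketch of fitting it via the Lineweaver-Burk linearization is shown below; the rate data and parameter values are hypothetical, noise-free illustrations, not the thesis's measurements.

```python
import numpy as np

def michaelis_menten(s, vmax, km):
    """Initial reaction rate v for substrate concentration s."""
    return vmax * s / (km + s)

# Noise-free synthetic initial-rate data (hypothetical units and parameters).
s = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0, 500.0])  # substrate, uM
v = michaelis_menten(s, vmax=12.0, km=40.0)

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/s) + 1/Vmax, a straight line in 1/s,
# so a degree-1 polynomial fit recovers both kinetic constants.
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax_fit = 1.0 / intercept
km_fit = slope * vmax_fit
```

In practice a direct nonlinear fit is preferred for noisy data (the linearization amplifies error at low substrate concentrations), but the double-reciprocal form makes the structure of the model easy to see.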

88.Vision-based Driver Behaviour Analysis

Author:Chao YAN 2016
Abstract:With ever-growing traffic density, the number of road accidents is anticipated to increase further. Finding solutions to reduce road accidents and improve traffic safety has become a top priority for many government agencies and automobile manufacturers alike. It has become imperative to develop Advanced Driver Assistance Systems (ADAS) able to continuously monitor not just the surrounding environment and vehicle state, but also driver behaviours. Dangerous driver behaviour, including distraction and fatigue, has long been recognized as a main contributing factor in traffic accidents. This thesis presents contributing research on vision-based driver distraction and fatigue analysis and pedestrian gait identification, which can be summarised in four parts as follows. First, driver distraction activities, including operating the shift lever, talking on a cell phone, eating, and smoking, are recognised under the framework of human action recognition. Computer vision technologies, including the motion history image and the pyramid histogram of oriented gradients, are applied to extract discriminative features for recognition. Moreover, a hierarchical classification system that considers different sets of features at different levels is designed to improve performance over conventional "flat" classification. Second, to address effectiveness under poor illumination and realistic road conditions and to improve performance, a posture-based driver distraction recognition system is extended, applying a convolutional neural network (CNN) to automatically learn and predict pre-defined driving postures. The main idea is to monitor driver arm patterns and extract discriminative information to predict distracting driver postures. 
Third, to analyse driver fatigue and distraction through the driver's eyes, mouth and ears, a commercial deep learning facial landmark locating toolbox (the Face++ Research Toolkit) is evaluated for localizing the regions of the driver's eyes, mouth and ears, and demonstrates robust performance under illumination variation and occlusion in real driving conditions. Then, semantic features for recognising the different statuses of the eyes, mouth and ears in image patches are learned via CNNs, which requires minimal domain knowledge of the problem.

89.Development of Low Cost CdS/CdTe Thin Film Solar Cells by Using Novel Materials

Author:Jingjin WU 2016
Abstract:Cadmium telluride (CdTe) thin film solar cells are one of the most promising solar cell technologies and hold 5% of the photovoltaics market. CdTe thin film solar cells are expected to play a crucial role in the future photovoltaics market. The limitations on terawatt-scale deployment of CdTe solar cells are the scarcity of raw materials, low power conversion efficiency, and stability. Over the last few decades, intensive studies have been made to further understand the material properties, explore substitute materials, and gain insight into defect generation and distribution in solar cells. Yet these problems are still not fully resolved. One significant topic is the replacement of indium tin oxide (ITO). Following the introduction of aluminium-doped zinc oxide (ZnO:Al or AZO) into thin film solar cell applications, zinc oxide based transparent conducting oxides have attracted attention from academic research institutes and industry. Zinc oxides are commonly doped with group III elements such as aluminium and gallium. Some researchers have introduced group IV elements, including titanium, hafnium and zirconium, and obtained good properties. In our work, we deposited zirconium-doped zinc oxide (ZnO:Zr or ZrZO) by atomic layer deposition (ALD). Thanks to the precise control of the chemical ratio afforded by ALD, the nature of ZrZO could be revealed. It is found that the ZrZO thin film has good thermal stability. With increasing zirconium concentration, the energy bandgap of the ZrZO film follows the Burstein-Moss effect. Another issue for CdTe solar cells is the doping of CdTe thin films: low carrier concentration in CdTe thin films limits the open circuit voltage and thus the power conversion efficiency. Copper is a compelling element used as a CdTe dopant; however, a high concentration of copper ions results in severe solar cell degradation. One approach was to evaporate a few nanometres of copper on the CdTe thin film followed by annealing. 
Another approach was to introduce a buffer layer between the CdTe thin film and the back metallic electrode. Numerous works have shown that an Sb2Te3 layer performs better than copper-based buffer layers, and that carbon-based buffer layers, such as graphene and single-wall carbon nanotubes, show excellent stability and permeability.

90.Trading Rule and Market Quality: Simulations based on Agent-based Artificial Stock Markets

Author:Xinhui Yang 2021
Abstract:The stock market is one of the most important financial markets in a country. In recent decades, many financial markets have changed their trading rules to achieve higher market quality (e.g. market liquidity, market volatility and price efficiency). This thesis focuses on three important trading rules—tick size, the secondary priority rule and the price limit—and tests their influence on market quality using agent-based artificial stock markets (ASMs), which are agent-based, order-driven simulated stock market models. Unlike empirical market data, ASMs ensure that the trading rule is the only exogenous variable varying between experiments. Given the lack of a consensus method for determining the fundamental stock price in real stock markets, previous empirical studies have generally focused on market liquidity and volatility. However, as the fundamental stock price can be set in ASMs, price efficiency can be analysed in addition to liquidity and volatility. Therefore, in this thesis, market quality is investigated from a more comprehensive perspective encompassing market liquidity, volatility and price efficiency. Tick size, the minimum change in stock price, is the first trading rule investigated in this study. Two types of tick size system are investigated: uniform and stepwise. Under the uniform tick size system, the tick size is the same for all stocks in the market. Testing market quality with tick sizes of 1, 0.1, 0.01 and 0.001, the results show that a smaller tick size can improve market quality, while an extremely small tick size would damage it. The price stepwise tick size system—where tick size increases with price—and the volume stepwise tick size system—where tick size increases with decreasing trade volume—are then investigated. The results indicate that both price stepwise and volume stepwise systems can promote market quality in different ways. 
These results might be expected, as the price stepwise system is mainly designed to limit noise in markets, while the volume stepwise system is used to balance the benefits for liquidity suppliers and demanders. Based on the performance of the price stepwise and volume stepwise systems, a combination stepwise tick size system is designed and investigated in this study to test whether it combines the advantages of the two systems and further improves market quality. A combination stepwise tick size system was proposed and supported by Goldstein and Kavajecz (2000) but has not been adopted in real stock markets. The tick size in a maximal or minimal combination system is determined by the larger or smaller of the tick sizes in the price stepwise and volume stepwise systems, respectively. Consistent with expectations, the results indicate that a combination system, especially a minimal combination system, can further promote market quality. The secondary priority rule, which determines how quoted orders in the market are matched, is the second trading rule investigated here. The impact of various secondary priority rules, including the time priority rule, pro-rata priority rule and equal sharing priority rule, on stock market quality is investigated, with consideration given to different investors' strategies under different secondary priority rules. The time priority (first-come, first-served) rule is the most common secondary priority rule in financial markets, and almost all stock markets choose it as their secondary priority rule. The pro-rata and equal sharing priority rules are generally used in other financial markets, such as futures markets. The pro-rata priority rule allocates market orders to limit orders on the best price list proportionally to limit order sizes, while the equal sharing priority rule allocates market orders equally. 
Since 2017 the New York Stock Exchange has used the 'parity' priority rule, a combination of the time and pro-rata priority rules, which indicates that some stock markets may have realised the importance of the secondary priority rule for market quality and have tried to identify a secondary priority rule more effective than time priority for promoting market quality. Taking market quality under the time priority rule as the benchmark, the results show that the pro-rata priority rule can enhance trading activity and price efficiency but can also increase volatility, while the equal sharing priority rule may damage market quality with respect to market liquidity, market volatility and price efficiency. The price limit—that is, an established amount by which a price may increase or decrease in any single trading period—is the third trading rule investigated in the thesis. In financial markets with a price limit, trades are prevented from occurring outside specified price bands. Previous empirical studies have shown that lower limit hits are followed by price reversals, low volatility and lower/stable trade volume, while upper limit hits are followed by price continuations, high volatility and higher trade volume (e.g. Kim et al., 2013; Li et al., 2014). This provides evidence that the price limit is beneficial when the lower limit is hit but harmful when the upper limit is hit. Therefore, a new policy with a lower price limit but no upper price limit (termed the asymmetric limit policy) is proposed here. Market quality under the asymmetric limit policy is tested and compared with that of a market adopting the symmetric limit policy (with both lower and upper limits) and a market without limits. The experimental results verify the hypothesis that the asymmetric limit policy can promote market quality significantly. 
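The asymmetric limit policy just described can be sketched as a simple band check. The 10% limit width below is a hypothetical illustration value, not the band used in the thesis's simulations; the only point is that the upper bound is absent under the asymmetric policy.

```python
def allowed_price_band(reference_price, lower_pct=0.10, upper_pct=None):
    """Return the (lower, upper) trading band around a reference price.
    upper_pct=None models the asymmetric policy: a lower limit but no upper limit."""
    lower = reference_price * (1 - lower_pct)
    upper = reference_price * (1 + upper_pct) if upper_pct is not None else float("inf")
    return lower, upper

def trade_permitted(price, reference_price, lower_pct=0.10, upper_pct=None):
    """A trade is allowed only if its price falls inside the band."""
    lower, upper = allowed_price_band(reference_price, lower_pct, upper_pct)
    return lower <= price <= upper
```

Passing `upper_pct=0.10` recovers the symmetric policy for comparison; the asymmetric default blocks crashes below the band while leaving rallies unconstrained.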
The reference price, which is the real-time price used to determine the price band under the price limit policy, is another focus of this study. It is found that, compared with the quoted price, the traded price is more suitable as the reference price under both asymmetric and symmetric limit policies. This finding suggests that an asymmetric price limit with the traded price as the reference price might be a feasible policy for stock markets to promote market quality. This thesis examines the effects of changes in tick size, secondary priority rule and price limit policy on market quality, including market liquidity, market volatility and price efficiency. The results indicate the effectiveness of the minimal combination tick size system, the pro-rata secondary priority rule and the asymmetric price limit for promoting market quality, which has important theoretical and management implications for stock markets. Moreover, by investigating trading rules that are still at the theoretical stage, this study shows that ASMs are an important complement to empirical studies.
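The price stepwise tick size system discussed earlier in the abstract can likewise be illustrated with a small sketch. The price bands and tick values below are hypothetical, not the schedules used in the simulations; the point is only the mechanics of snapping quotes to a price-dependent tick.

```python
def price_stepwise_tick(price):
    """Tick size under a hypothetical price stepwise schedule:
    the tick grows as the stock price enters higher bands."""
    bands = [(10.0, 0.001), (100.0, 0.01), (1000.0, 0.1)]
    for upper, tick in bands:
        if price < upper:
            return tick
    return 1.0  # tick for prices of 1000 and above

def round_to_tick(price):
    """Snap a quoted price to the nearest valid tick for its band."""
    tick = price_stepwise_tick(price)
    return round(round(price / tick) * tick, 6)
```

A volume stepwise schedule would look the same with trade volume in place of price; a minimal combination system would take the smaller of the two ticks for each stock.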

91.Realization of normally-off GaN HEMTs for high voltage and low resistance applications

Author:Yutao Cai 2021
Abstract:With the development of power electronics, replacing silicon with a promising alternative has become necessary in the field of high-power applications. GaN-based devices are attractive for high-power switching applications owing to their superior advantages of high breakdown electric field, high carrier mobility, and fast switching speed. However, the realization of normally-off GaN-based devices for high voltage and low resistance applications has not been fully accomplished. In this thesis, the simulation, fabrication, and characterization of AlGaN/GaN MIS-HEMTs for improving high-power properties are carried out. TCAD simulation was first implemented to understand the effect of gate dielectric parameters and Al2O3/GaN interface states on the C-V behavior of AlGaN/GaN MIS-capacitors. After that, an economical and effective method, 1-octadecanethiol (ODT) treatment of the GaN surface prior to Al2O3 gate dielectric deposition, was proposed to improve the Al2O3/GaN interface quality. GaN-based metal-insulator-semiconductor devices treated with HCl, O2 plasma and ODT have been demonstrated. The ODT treatment is found capable of suppressing native oxide and effectively passivating the GaN surface, hence considerably improving the interface quality of the device. The interface trap density of Al2O3/GaN was calculated to be around 3.0x10^12 cm^-2 eV^-1 for devices with the ODT treatment, a relatively low value among those reported for Al2O3 gate dielectrics in GaN-based MIS devices. Moreover, there is also an improvement in the gate control characteristics of MIS-HEMTs fabricated with the ODT treatment. In addition, a simulation of off-state breakdown voltage and electric field profiles in the MIS-HEMTs as functions of the device structure was carried out. 
To improve the high voltage performance of the devices, AlGaN/GaN MIS-HEMTs with SiNx single-layer passivation, Al2O3/SiNx bilayer passivation, and ZrO2/SiNx bilayer passivation are investigated. High-k dielectrics are adopted as the passivation layer on MIS-HEMTs to suppress the shallow traps on the GaN surface. High-k-passivated MIS-HEMTs also show improved breakdown characteristics, which is explained by 2-D simulation analysis. The fabricated devices with high-k dielectric/SiNx bilayer passivation exhibit improved power performance compared to devices with plasma-enhanced chemical vapor deposition SiNx single-layer passivation, including lower leakage currents, smaller current collapse, and higher breakdown voltage. The Al2O3/SiNx passivated MIS-HEMTs exhibit a breakdown voltage of 1092 V, and the dynamic Ron is only 1.14 times the static Ron after an off-state VDS stress of 150 V; the ZrO2/SiNx passivated MIS-HEMTs exhibit a higher breakdown voltage of 1203 V, with a dynamic Ron of 1.25 times the static Ron after the same stress. Furthermore, to realize GaN-based devices with normally-off operation, an AlGaN/GaN MIS-FET with a fully recessed gate structure was first investigated. These devices exhibited a large on-state resistance, which is not desirable for high-power applications. A novel normally-off AlGaN/GaN MIS-HEMT structure with a ZrOx charge trapping layer is then proposed: the ZrOx charge trapping layer is deposited on the partially recessed AlGaN in conjunction with the Al2O3 gate dielectric. The fabricated MIS-HEMTs presented a threshold voltage of +1.51 V and a maximum drain current density of 779 mA/mm, accompanied by a low on-resistance of 7 Ω·mm. 
Moreover, after an off-state VDS,Q stress of 200 V, the dynamic on-resistance degradation was as low as a factor of 1.5, indicating a satisfactory interface between ZrOx and GaN. Furthermore, the devices exhibit a high breakdown voltage of 1447 V. Although further improvement is needed in charge storage stability, the results indicate the significant potential of employing an ALD-ZrOx charge trapping layer to realize normally-off GaN-based devices for high-power applications.

92.The impact of margin-trading and short-selling reform on liquidity: Evidence from the Chinese stock market

Author:Shengjie Zhou 2021
Abstract:Margin-trading and short-selling activities in the Chinese stock market are unique in that only a subset of stocks is eligible for margin-trading and short-selling, and the list of eligible stocks changes over time. In addition, daily data on margin-trading and short-selling activities are available for each individual stock. Taking advantage of this market design and using daily data from March 2010 to the end of 2016, I first show that stocks' eligibility for margin-trading and short-selling contributes to improvements in stock liquidity as measured by the effective spread and Amihud's (2002) illiquidity ratio. Secondly, differentiating the impacts of margin-trading and short-selling, I find that margin-trading enhances liquidity while short-selling impairs it. I further show that the detrimental effect of short-selling on liquidity arises because it increases the adverse selection risk of the relevant stocks. The results suggest that short-sellers are informed traders, as short-selling has predictive power for returns; moreover, short-selling in stocks with the highest level of information asymmetry tends to have the strongest negative impact on stock liquidity. Thirdly, I demonstrate the asymmetric impacts of margin-trading and short-selling under different market conditions. In poor market conditions, stocks eligible for margin-trading and short-selling tend to have lower rather than higher liquidity; furthermore, margin-trading hinders liquidity while short-selling improves it. Hence, the impacts of margin-trading and short-selling on liquidity are reversed during market downturns. This finding helps to reconcile the discrepancy between many findings in the literature and regulators' policy of banning short selling during market crises. 
I also examine the impacts of margin trading and short selling on the lead-lag relations in liquidity and return between stocks eligible for margin trading and short selling and other stocks. First, applying Vector Autoregression (VAR) models to minute-level data, I find a strong lead-lag relation in both liquidity and return between eligible and ineligible stocks: liquidity and returns of eligible stocks lead those of ineligible stocks. This lead-lag effect persists under different market conditions. In addition, the lead-lag effect in liquidity is stronger when investors face constrained funding liquidity, which supports the theoretical model of Brunnermeier and Pedersen (2009) on the interaction between funding liquidity and stock liquidity. Second, only margin trading has a significant impact on the lead-lag relations. To explain why margin trading affects the lead-lag relations, I propose three possible mechanisms (a deleverage channel, a cross-asset learning channel, and an information diffusion channel) and use mediation analysis to test the importance of each. The deleverage channel accounts for 58.24% (70.73%) of the impact of margin trading on the lead-lag effect in liquidity (return), the information diffusion channel explains only 2.28% (0.86%), and the cross-asset learning channel explains 39.58% (28.41%). This study provides the first empirical evidence in the literature on the lead-lag relation in liquidity. In addition, it is the first to demonstrate a return lead-lag relation at the intraday level. Finally, it highlights the role that margin trading plays in forming such lead-lag relations in both liquidity and return.
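The mediation analysis above decomposes a total effect into the shares carried by each channel. A minimal sketch of that decomposition, in the classic product-of-coefficients form: the path coefficients below are hypothetical illustrations, not the estimates from this thesis.

```python
# Minimal sketch of a mediation-analysis decomposition: the share of a total
# effect carried by the indirect path (treatment -> mediator -> outcome).
# All coefficient values are hypothetical, for illustration only.

def mediated_share(a, b, c_total):
    """Proportion of the total effect c_total explained by the
    indirect effect a*b (treatment->mediator times mediator->outcome)."""
    return (a * b) / c_total

# Hypothetical path coefficients for one mediating channel:
a, b = 0.5, 0.4      # treatment -> mediator, mediator -> outcome
c_total = 0.25       # total effect of treatment on outcome
share = mediated_share(a, b, c_total)
print(f"indirect effect = {a * b:.2f}, share of total = {share:.0%}")
```

With several channels, the same ratio is computed per channel, which is how figures such as "58.24% of the impact" arise.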

93.Learning Density Models via Structured Latent Variables

Author:Xi Yang 2018
Abstract:As a principal approach to machine learning and cognitive science, the probabilistic framework has been continuously developed both theoretically and practically. Learning a probabilistic model can be thought of as inferring plausible models to explain observed data. The learning process exploits random variables as building blocks held together by probabilistic relationships. The key idea behind latent variable models is to introduce latent variables as powerful instruments to reveal data structures and explore the underlying features that describe real-world data. Classical research approaches employ shallow architectures, including latent feature models and finite mixtures of latent variable models. Within the classical frameworks, one must make certain assumptions about the form, structure, and distribution of the data. Since the shallow form may not describe the data structures sufficiently, new types of latent structures have been developed within the probabilistic framework. Along this line, three main research directions have emerged: infinite latent feature models, mixtures of mixture models, and deep models. This dissertation summarises our work advancing the state of the art in both classical and emerging areas. In the first block, a finite latent variable model with parametric priors is presented for clustering and is further extended into a two-layer mixture model for discrimination. These models embed dimensionality reduction in their learning tasks by designing a latent structure called the common loading. Referred to as joint learning models, they attain a low-dimensional space that better matches the learning task, while the parameters for both the low-dimensional space and the model are optimised simultaneously.
However, these joint learning models must assume a fixed number of features as well as mixtures, which are normally tuned and searched by trial and error. In general, simpler inference can be performed by fixing more parameters, but fixed parameters limit the flexibility of models, and false assumptions can even lead to incorrect inferences from the data. A richer model therefore allows the number of assumptions to be reduced, so an infinite tri-factorisation structure with non-parametric priors is proposed in the second block. This model can automatically determine an optimal number of features and leverage the interrelation between data and features. In the final block, we introduce how to extend shallow latent structures to deep structures that handle richer structured data. This part includes two tasks: one is a layer-wise model, the other a deep autoencoder-based model. In a deep density model, the knowledge of cognitive agents can be modelled using more complex probability distributions, while inference and parameter computation remain straightforward using a greedy layer-wise algorithm. The deep autoencoder-based joint learning model is trained end to end, without pre-training of the autoencoder network, and can be optimised by standard backpropagation without maximum a posteriori inference. Deep generative models are much more efficient than their shallow counterparts for unsupervised and supervised density learning tasks, and they can be developed and used in various practical applications.
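To make the "finite mixture of latent variables" idea concrete, the sketch below fits the simplest such shallow model, a two-component 1D Gaussian mixture, by expectation-maximisation, where the latent variable is the unobserved component assignment of each point. This is a generic textbook illustration, not the dissertation's joint learning model.

```python
import math
import random

# EM for a two-component 1D Gaussian mixture: the simplest shallow
# latent-variable model, with the component assignment as latent variable.

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, iters=50):
    mu = [min(data), max(data)]   # crude initialisation at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate mixing weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return pi, mu, var

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(5, 1) for _ in range(200)])
pi, mu, var = em_gmm(data)
print(sorted(round(m, 1) for m in mu))  # component means near 0 and 5
```

Fixing the number of components to two is exactly the kind of assumption that the non-parametric priors in the second block of the thesis aim to remove.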

94.Upstream network actors' operational capabilities for servitization through service offshoring: Impact on the performance of manufacturers' service offshoring contracts

Author:Zhuang Ma 2020
Abstract:Drawing on the operational capabilities perspective, this thesis investigates how upstream network actors (manufacturers' service delivery centres and local service specialists) contribute to manufacturers' operational capabilities through captive offshoring and offshore outsourcing contracts, and how these capabilities influence manufacturers' service offshoring performance. To address this aim, the thesis adopts a mixed-methods research design integrating qualitative and quantitative examinations. The qualitative study comprises 26 semi-structured interviews with senior managers in service offshoring companies to explore and identify the operational capabilities contributed by manufacturers' offshore upstream network actors. Thematic analysis of the qualitative data identifies seven operational capabilities arising from manufacturers' captive offshoring and offshore outsourcing: 'process improvement' ('PI'), 'scalable service-enabling technology' ('SST'), 'scalable and well-trained service talents' ('SWS'), 'service and process innovation' ('SPI'), 'product/service customisation' ('PSC'), 'in-country relationship management' ('IRM') and 'security and IP protection protocols' ('SIP'). The subsequent quantitative study proposes seven hypotheses regarding the contributions of these operational capabilities to manufacturers' service offshoring performance, as well as the moderating effect of service offshoring mode on these relationships. Through a large-scale survey in five cities of the Yangtze River Delta region of China, the research collected 360 sets of responses from 1734 firms involved in manufacturers' service offshoring contracts. Hierarchical multiple regression analysis confirms that 1) all seven capabilities contribute to manufacturers' service offshoring performance and 2) service offshoring mode moderates only the relationships between three of the capabilities ('SST', 'SWS' and 'SPI') and performance.
This thesis makes four major theoretical contributions. First, it focuses on manufacturers' offshore upstream network and discusses the uniqueness of the identified operational capabilities, which complement the downstream capabilities in the servitization literature. Second, it evaluates the importance of operational capabilities to manufacturers' service offshoring contracts. Third, it provides an alternative perspective (other than transaction costs) to explain manufacturers' service offshoring choices, given that 'SST' is more important for captive offshoring (Mode 1), while 'SWS' and 'SPI' are more important for offshore outsourcing (Mode 2). Fourth, the qualitative stage identifies in-country outsourcing as a new mode of offshoring (Mode 3), which updates our understanding of manufacturers' service offshoring arrangements and merits further investigation. The thesis also provides important practical implications. First, servitizing manufacturers should consider the transferability of specific operational capabilities when choosing service offshoring modes. Second, service delivery centres should work with local service specialists to develop operational capabilities. Third, local service specialists should understand the capability requirements of manufacturers and service delivery centres and develop mutual trust with them. Fourth, local authorities should consider developing comprehensive infrastructure and a supportive environment to attract investors in the service offshoring sector. This study is subject to several limitations that point to future research, such as developing objective measures for the performance of manufacturers' service offshoring contracts, considering both upstream and downstream network actors in manufacturers' servitization activities, and comparing onshore and offshore servitization.

95.Machine learning based trading strategies for the Chinese stock market

Author:Juan Du 2020
Abstract:This thesis focuses on machine-learning-based trading strategies for Chinese Exchange Traded Funds (ETFs). Machine learning and artificial intelligence (AI) provide an innovative level of service for financial forecasting, customer service and data security. Through the development of automated investment advisors powered by machine learning, financial institutions such as JPMorgan, Bank of America and Morgan Stanley have recently adopted AI-based investment forecasting. This thesis intends to provide original insights into machine-learning-based trading strategies by producing trading signals based on forecasts of stock price movements. Theories and models associated with algorithmic trading, price forecasting and trading signal generation are considered, in particular machine learning models such as logistic regression, support vector machines, neural networks and ensemble learning methods. Each potentially profitable strategy for the China ETFs is tested, and the risk-adjusted returns of the corresponding strategies are analysed in detail. The primary aim of this thesis is to develop two machine-learning-based trading strategies in which machine learning models are used to predict trading signals. Each machine learning model and their combinations are first employed to generate trading signals from one-day-ahead forecasts, demonstrating that the resulting excess return does not cover the transaction costs. This motivates reducing the number of unprofitable trades in the trading system by adopting 'multi-day forecasts' in place of the 'one-day-ahead forecasts'. Investors thereby benefit from a longer prediction horizon, which provides more predicted information on the total number of upward (or downward) price movements, and can make trading decisions based on the majority of the predicted trading signals within the prediction horizon. Moreover, this style of trading rule is consistent with industry practice.
The strategy is flexible enough to allow risk-averse and risk-loving investors to make different trading decisions. A multi-day-forecast-based trading system using random forests yields positive risk-adjusted returns after transaction costs, indicating that some machine learning techniques can successfully assist individuals in their decision-making activities.
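The multi-day rule described above can be sketched as a simple majority vote over the predicted daily movements within the horizon. The +1/-1 signal encoding and the threshold parameter are illustrative assumptions, not the thesis's exact specification.

```python
# Sketch of a multi-day-forecast trading rule: trade only when a clear
# majority of the predicted movements within the horizon agree.
# Encoding (+1 up, -1 down) and the threshold are illustrative assumptions.

def multi_day_signal(predicted_moves, threshold=0.5):
    """predicted_moves: list of +1/-1 daily forecasts over the horizon.
    Returns +1 (buy), -1 (sell) or 0 (stay out)."""
    n = len(predicted_moves)
    ups = sum(1 for m in predicted_moves if m > 0)
    if ups / n > threshold:
        return 1
    if (n - ups) / n > threshold:
        return -1
    return 0  # no clear majority -> no trade, saving transaction costs

print(multi_day_signal([1, 1, -1, 1, 1]))  # 4/5 predicted up -> buy
print(multi_day_signal([1, -1, 1, -1]))    # tie -> stay out
```

Raising the threshold makes the rule more conservative, which is one way a risk-averse investor could trade less often than a risk-loving one under the same forecasts.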

96.The long-term dynamical evolution of planetary-mass objects in star clusters

Author:Francesco Maria Flammini Dotti 2021
Abstract: The search for exo-planetary systems has seen tremendous progress in recent decades and has resulted in astounding discoveries. Since the discovery of the first confirmed exoplanet orbiting a main sequence star in 1995, astronomers have attempted to measure and explain the characteristics of exo-planetary systems. Due to observational constraints, most of the discovered planetary systems were detected orbiting nearby field stars. To fully understand the formation and early evolution of planetary systems, it is necessary to study planetary systems in dense stellar environments, the birth places of stars. In these environments, gravitational interactions with neighbouring stars can substantially affect the architecture of planetary systems. Most stars, perhaps all stars in the Galaxy, formed in crowded environments. Most of these stellar aggregates dissolve within ten million years, while others remain gravitationally bound for millions to billions of years in the open clusters or globular clusters that are present in our Milky Way today. It is now commonly accepted that a large fraction of stars in our Milky Way host planetary companions. To backtrack the origin and dynamical evolution of exoplanets, it is necessary to carefully study the effects of the environments in which these planetary systems spent their youth, and of the Galactic field, open clusters, or globular clusters in which they may spend the remaining part of their lives. In this work we analyse how different environments affect the dynamical evolution of planetary systems and free-floating planets. We analyse the effect of the star cluster environment on the evolution of planetary systems by varying the initial stellar density of the star cluster, by studying the influence of an intermediate-mass black hole (IMBH) in the cluster centre, and by varying other star cluster properties (e.g., global rotation and virial ratio).
We focus on the evolution of multi-planet systems, rogue planets (i.e., planets not gravitationally bound to a star) and single-planet systems with a proto-planetary disk. We find that the star cluster environment can have a significant influence on the dynamics of planetary systems. Generally, the disruption rate of planetary systems is higher (i) when the star cluster is denser, (ii) when encountering stars have speeds comparable to the orbital speed of the planets, (iii) when the encounter is more impulsive (i.e., at smaller distances between encountering stars and planets) and (iv) for encountering stars on near-parabolic trajectories. Planet-planet scattering, induced by encounters with neighbouring stars, plays a dominant role in shaping the evolution and final architecture of a multi-planet system. Disruption of planetary systems occurs more frequently in the presence of an IMBH, notably during the early phases of star cluster evolution (before the cluster fills its Roche lobe). The presence of a central IMBH enhances the ejection rate of stars and free-floating planets from the star cluster, while global rotation in the star cluster reduces this ejection rate.

97.An experimental and numerical study on the impact of wind-induced turbulence on gaseous dispersion in porous media

Author:Alireza Pourbakhtiar 2018
Abstract:This research focuses on how wind turbulence influences gas transport in porous media. This is useful for measuring greenhouse gas fluxes from the subsurface to the atmosphere, or the emission of hazardous gases such as radon into buildings, and is relevant to any field in which gas is transported through porous media. A novel experimental arrangement is demonstrated for measuring wind-turbulence-induced gas transport in dry porous media under controlled conditions. This equipment was used to measure the effect of wind turbulence on gas transport (quantified as a dispersion coefficient) as a function of distance from the surface of the porous medium exposed to wind. Two different methods for measuring wind-induced gas transport were compared; in one approach, a modified version of the other, five sensors placed inside the sample of porous material at equal intervals measure the oxygen concentration. Both approaches are used for measuring diffusion and wind-induced dispersion. Tracer gases of O2 and CO2 with average vertical (perpendicular to the surface of the porous medium) wind speeds of 0.02 to 1.06 m s-1 were applied at room temperature. Five different soil fractions were used to determine how particle size affects gas transport under a given wind condition at the surface of the soil as the porous medium. It is shown that gas dispersion was 20–100 times higher due to wind action. Ten wind conditions (plus a calm condition with zero wind speed) were selected, and the three perpendicular components of the wind as well as the wind fluctuations were characterized. Oxygen breakthrough curves as a function of distance from the wind-exposed surface of the porous medium were analysed numerically with a finite-difference-based model to assess gas transport.
Potential relationships between breakthrough time and wind speed characteristics, in terms of average wind speed, wind speed standard deviation and wind speed power spectrum properties in three dimensions, were investigated. Statistical analyses indicated that wind speed had a very significant impact on breakthrough time and that the characteristics of the wind speed component perpendicular to the porous medium surface were especially important. For the experiments, the penetration depth (Z50) is introduced, and a linear inverse relation between penetration depth and the empirical factor is determined. Wind characteristics can affect the gas transport speed and penetration depth inside porous media for particle sizes above 1 mm; at particle sizes below 0.5 mm the effect of wind on gas transport is negligible. The influence of different wind speed characteristics, such as average wind speed and its power spectrum, and of particle shape and size on gas transport is analysed. The main wind component affecting gas transport was found to be the vertical one. An expression (Eq. 26) for calculating the wind-induced dispersion coefficient, dependent on wind speed, has been developed. Direct calculation of the empirical factors and the wind-induced dispersion coefficient of porous media at the surface is made more accurate by fitting the empirical and numerical parameters.
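The finite-difference analysis of breakthrough curves mentioned above can be illustrated with a minimal explicit scheme for 1D transport under an effective dispersion coefficient D. The grid sizes, boundary condition and value of D below are illustrative assumptions, not the thesis's parameters or model.

```python
# Minimal explicit finite-difference sketch of 1D gas transport C(z, t)
# under an effective dispersion coefficient D (illustrative values only).

def diffuse_1d(D, dz, dt, steps, n_cells):
    # Explicit-scheme stability requires r = D*dt/dz**2 <= 0.5
    r = D * dt / dz ** 2
    assert r <= 0.5, "explicit scheme unstable for these parameters"
    C = [0.0] * n_cells
    for _ in range(steps):
        C[0] = 1.0  # fixed tracer concentration at the wind-exposed surface
        new = C[:]
        for i in range(1, n_cells - 1):
            new[i] = C[i] + r * (C[i + 1] - 2 * C[i] + C[i - 1])
        C = new
    return C

# Effective dispersion a few tens of times molecular diffusion, the order of
# magnitude of the wind enhancement reported above (hypothetical value).
profile = diffuse_1d(D=2e-4, dz=0.01, dt=0.2, steps=500, n_cells=50)
print([round(c, 2) for c in profile[:12]])  # concentration vs depth
```

Breakthrough time at a given depth is then simply the first time step at which the local concentration crosses a chosen threshold (e.g. 50% of the surface value, in the spirit of Z50).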

98.Agricultural Straw Fibre Reinforced Concrete for Potential Industrial Ground-Floor Slab Application

Author:Bhooma Nepal 2020
Abstract:The primary objective of this research was to advance, through experimental research, knowledge on the use of agricultural straw fibre reinforcement in concrete. The focus is on the manufacture of straw composites, the development of the concrete matrix, and the investigation of concrete samples through various tests and standards to assess suitability for use in ground-floor slab applications. Synthetic fibres such as steel and polypropylene used in the construction industry are not only expensive; the carbon emissions produced during their manufacture and their non-renewability have also been a major challenge for the industry. Following the recent trend towards sustainable building materials, this research focuses on straw fibres, a crop by-product produced in large quantities. Straw has little significant economic use and is generally disposed of by farmers, often by open-air burning, a practice that causes severe air pollution and harms the health of many people worldwide. The straw composites developed through this research not only make use of straw that currently brings no economic benefit but also prevent unsafe disposal. This leads to reduced greenhouse gas emissions by curbing open-air burning, uses a biodegradable, locally available material, and replaces synthetic non-renewable fibres in construction practice. The composite fibres developed embody a sustainable path for future researchers and fibre manufacturers towards a cleaner construction industry. Both rice and wheat straw fibres treated with boiled water displayed increases in tensile strength of 38% and 55%, respectively, compared with their raw state. However, the tensile strength was not sufficient to form a strong enough bond with concrete to replace commercial fibres.
Hence, composite fibres comprising straw fibres mixed with different polymer compounds were manufactured and tested. Composite fibres with up to 35% straw fibre content were determined to be the optimum fibre reinforcement in concrete; these composite fibres have tensile strength and ductility characteristics similar to industrially available synthetic fibres. At a 1% volume fraction of straw composite fibre in concrete, the residual tensile strength was 1.88 MPa at 0.47 mm beam deflection and 1.33 MPa at 3.02 mm deflection. Through the successful development of several series of straw-polymer composite fibres, this study demonstrates that straw fibres can be a viable alternative to synthetic fibres. These fibres are not only easy to manufacture and cost-effective; they also help to conserve energy, offer greater design flexibility, and reduce greenhouse gas emissions.

99.Development of Multiphase and Multiscale Mathematical Models for Liquid Feedstock Thermally Sprayed Thermoelectric Coatings

Author:Ebrahim Gozali 2016
Abstract:The manufacture of nanostructured coatings by thermal spraying is currently a subject of increasing research effort aimed at obtaining unique and often enhanced properties compared to conventional coatings. High Velocity Suspension Flame Spraying (HVSFS) has recently appeared as a potential alternative to conventional High Velocity Oxygen-Fuel (HVOF) spraying for processing nanostructured spray material into dense surface layers in supersonic mode with a refined structure, from which superior physical and mechanical properties are expected. The aim of this thesis is, first, to apply CFD methods to analyse the system characteristics of high-speed thermal spray coating processes in order to improve the technology and advance the quality and efficiency of the HVSFS process, and second, to analyse heat transfer in thin films and thermoelectric thin films. The first part of the thesis aims to deepen the knowledge of this multidisciplinary process and to address current drawbacks, mainly cooling effects and the reduction of the overall performance of the spray torch. To this end, a detailed parametric study was carried out to model and analyse the premixed (propane/oxygen) and non-premixed (ethanol/oxygen) combustion reactions, the gas flow dynamics of the HVSFS process, the interaction mechanism between the gas and liquid droplets, including disintegration and vaporization, and the droplet injection point (axial, transverse, and external), using an industrial DJ2700 torch (Sulzer-Metco, Wohlen, Switzerland) as an example. The numerical results reveal that the initial mass flow rate of the liquid feedstock mainly controls the HVSFS process and that radial injection schemes are not suitable for this system. The second part of the thesis focuses on investigating the effects of solvent composition and type on liquid droplet fragmentation and evaporation, combustion, and HVSFS gas dynamics.
Here the solvent mixture is treated as a multicomponent droplet in the numerical model. The numerical results can serve as a reference for avoiding extraneous trial-and-error experimentation: they can assist in adjusting spraying parameters, e.g. the ratio or percentage of solvents for different powder materials, and they provide a way of visualizing the phenomena occurring during liquid spraying. In the third part, the effects of solid nanoparticle content on liquid feedstock trajectory in the HVSFS are investigated. Theoretical models are used to calculate the thermo-physical properties of the liquid feedstock. Various solid nanoparticle concentrations in suspension droplets of different diameters are selected, and their effects on gas dynamics, vaporization rate and secondary break-up are investigated. It is found that small droplets with high concentrations are more resistant to break-up; vaporization thereby becomes the dominant factor controlling the process, leaving some droplets not fully evaporated. Larger droplets, however, undergo severe fragmentation inside the combustion chamber and release their nanoparticles in the middle of the barrel after full evaporation. Finally, a heat transfer model is developed for nanoparticles travelling inside thermal spray guns. In the absence of experimental data for nanoscale in-flight particles, the model is validated on thermoelectric thin films as a candidate application of the HVSFS process. For this purpose, the one-dimensional heat conduction problem in a thin film is investigated by solving three different heat conduction equations: the parabolic heat conduction equation (Fourier equation), the hyperbolic heat conduction equation (non-Fourier heat conduction), and the ballistic-diffusive heat conduction equations. A stable and convergent finite-difference scheme is employed to solve the hyperbolic heat conduction (HHC) equation and the ballistic-diffusive equations (BDE).
The ballistic part of the BDE is solved with the Gauss-Legendre integration scheme. These equations are then applied across a thermoelectric thin film to investigate the steady-state and transient cooling mechanisms at the cold junction surface. The numerical results indicate that some of these equations predict inaccurate results for transient heat conduction in a thin film, leading to less accurate predictions of the cooling at the cold-side boundary and of the temperature and heat flux profiles in a thermoelectric film.
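For reference, the first two of the three heat-conduction descriptions compared above take the following standard textbook forms (generic notation: q heat flux, T temperature, k conductivity, α thermal diffusivity, τ flux relaxation time; the thesis's exact formulation may differ):

```latex
% Parabolic (Fourier) conduction: flux responds instantaneously to the gradient
\mathbf{q} = -k\,\nabla T, \qquad
\frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T

% Hyperbolic (Cattaneo) conduction: flux relaxes over a time \tau, giving a
% finite propagation speed, relevant at thin-film scales
\tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k\,\nabla T,
\qquad
\tau\,\frac{\partial^{2} T}{\partial t^{2}}
  + \frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T
```

The ballistic-diffusive equations go one step further by splitting the heat carriers into a ballistic part (treated by integration, here Gauss-Legendre) and a diffusive part governed by a Cattaneo-type equation.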

100.New Development on Graphene-Contacted Single Molecular Junctions

Author:Qian Zhang 2019
Abstract:Molecular electronics holds great promise for realizing the ultimate miniaturization of electronic devices, and the investigation of charge transport through molecules tethered between pairs of electrode contacts is one of the most active areas of contemporary molecular electronics. To date, metallic materials have been widely used as electrodes to construct molecular junctions, where the desired characteristics are outstanding stability, conductivity, and fabricability. However, there is an increasing realization that new single-molecule electrical junction functionality can be achieved through the use of non-metallic electrodes. Fundamental studies suggest that carbon-based materials have the potential to be valuable alternative electrode materials for molecular electronics in the next generation of nanostructured devices. In light of this, we systematically investigate the possibility of constructing non-metallic molecular junctions, and the corresponding charge transport properties, by replacing the common gold electrodes with graphene electrodes. We have measured the electrical conductance of molecular junctions based on alkanedithiol/alkanediamine chains sandwiched between a gold and a graphene electrode and compared the effects of anchoring groups in graphene-based junctions. We also studied the technical effects of molecule-electrode contacts by comparing methods for capturing and measuring the electrical properties of single molecules in gold-graphene contact gaps. The decay constant obtained by STM-based I(s) and CP-AFM BJ techniques, which is much lower than that obtained for symmetric gold junctions, is related to the weak coupling at the molecule-graphene interface and the electronic structure of graphene. This asymmetric coupling induces higher conductance for alkanediamine chains than in the same hybrid metal-graphene molecular junction using thiol anchoring groups.
Moreover, we introduce an efficient data sorting algorithm and demonstrate its capability on real experimental data sets. In consequence, we suggest that novel 2D materials could serve as promising electrodes for constructing nonsymmetric junctions, and that the use of appropriate anchoring groups and techniques may lead to a much lower decay constant and more conductive molecular junctions at longer lengths.
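The decay constant discussed above comes from the standard exponential length dependence of tunnelling junctions, G = G_c exp(-βL): β is the negative slope of ln(G) against molecular length L. The sketch below fits β by least squares; the data values are hypothetical, not the thesis's measurements.

```python
import math

# Extract a tunnelling decay constant beta from conductance-vs-length data,
# assuming G = G_c * exp(-beta * L): beta is -slope of ln(G) against L.
# All data values below are hypothetical, for illustration only.

def decay_constant(lengths, conductances):
    """Least-squares slope of ln(G) vs L; returns beta (positive decay)."""
    n = len(lengths)
    x_mean = sum(lengths) / n
    y = [math.log(g) for g in conductances]
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (yi - y_mean) for x, yi in zip(lengths, y))
    den = sum((x - x_mean) ** 2 for x in lengths)
    return -num / den

# Hypothetical alkane-chain data: lengths in nm, conductance in units of G0,
# generated with beta = 2.0 per nm.
L = [0.5, 1.0, 1.5, 2.0]
G = [1e-3 * math.exp(-2.0 * x) for x in L]
print(round(decay_constant(L, G), 3))
```

A lower fitted β is what makes the asymmetric gold-graphene junctions above attractive: conductance falls off more slowly with molecular length than in symmetric gold junctions.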
Total 200 results found
Copyright 2006-2020 © Xi'an Jiaotong-Liverpool University 苏ICP备07016150号-1 京公网安备 11010102002019号