Find Theses


81. Morphodynamics of Fence-dune Systems

Author: Qingqian Ning, 2021
Abstract: Aeolian processes remove nutrients from arable land, pose potential threats to human respiratory systems, and deteriorate infrastructure and natural habitats. Fences have been widely used to control aeolian processes in two ways: to reduce wind speed so as to mitigate erosion, or to initiate dunes that protect areas of interest. The wind reduction effect of fences has been intensively studied, while their sand trapping effect needs further investigation. A new laser sheet sensor was developed to measure the instantaneous aeolian flux. In laboratory calibration it was found that the sensor has a short response time, good consistency, and a high saturation limit. However, its field performance was not as expected, so the sensor should be calibrated on site. The efficiency of a fence in wind speed reduction and sand trapping is determined by many parameters, including fence height, length, width, opening size, porosity, and opening distribution. The impact of fence height on sand trapping capacity is important but had not been investigated. Fences of three different heights were deployed in the field and the dune development parameters were recorded. The results showed two stages of development: in stage I the dune grew vertically, and in stage II it expanded horizontally. Moreover, the maximum dune height was proportional to the fence height. Porosity is a critical parameter in fence design, yet different configurations of fences with the same porosity had not been investigated. A field experiment was conducted on a beach with eight different fence configurations, and a terrestrial laser scanner was used to measure the dune morphology during development. The results show that the fence configuration influences not only where the embryo dune emerges, but also how the final dune appears.
The 3-D dune morphology showed that a length-height ratio of 10 or more is adequate to keep the lateral edge effect negligible at the central profile. It was also found that the smaller the opening size, the higher the merging point of the windward and leeward dunes. To investigate the interactions in the wind-sand-fence-dune system, the two-dimensional wind field must be characterised. However, since deploying multiple towers of anemometers is prohibitively expensive, wind tunnel measurements were taken on fence/dune models at a model-prototype ratio of 1:10. The results showed that the wind reduction effect of every fence is negligible above twice the fence height, while the reduction between the surface and twice the fence height varies significantly. The upper-denser fence was the only one with a considerable region of severely reduced wind speed. The wind field on the windward side was similar across fences, while that on the leeward side varied significantly. Moreover, the wind-sand-fence-dune system remained in a negative feedback loop until it reached a dynamic equilibrium. The wind near the dune profiles of the upper-denser and lower-denser fences was above 3 m/s, suggesting that equilibrium was reached for those two fences. For the dunes of the other fences, there were regions with wind velocities of 2 m/s or lower, in which particles are prone to deposit and the dune surface is likely to develop further.

82. The mechanisms to regulate arsenic behaviors in redox transition zones in paddy soils

Author: Zhaofeng Yuan, 2020
Abstract: Rice (Oryza sativa L.) is the staple food for much of the world's population, especially in Asia, but rice production is threatened by arsenic (As) contamination in paddy soil. Arsenic contamination of paddy soil is mainly caused by anthropogenic activities, such as mining and irrigation with high-As groundwater. External As first enters the overlying water and then accumulates in the paddy soil. The soil-water interface (SWI) is the gate controlling As exchange between soil and overlying water, and the rhizosphere is the inlet for As from soil into the rice root. Under natural conditions, a redox transition occurs along both micro-interfaces due to atmospheric O2 diffusion or radial O2 loss from roots. Arsenic is sensitive to redox conditions and tends to vary over space and time across these micro-interfaces. However, a deep understanding of As cycling in the paddy water-soil-rice system has been hindered to date by the lack of techniques to sample micro-interfaces repeatedly at high resolution. To fill this gap, a novel high-resolution porewater sampler was developed in this study and used to examine the spatiotemporal control of As at the paddy SWI and rhizosphere. A hollow fiber membrane tube (~2 mm diameter) was evaluated for sampling dissolved elements via a passive diffusion mechanism. The results showed that solutes surrounding the tube can be quantified every ≥ 24 h regardless of pH, ionic strength, and dissolved organic matter conditions. This technique, called the In-situ Porewater Iterative (IPI) sampler, was further validated in soils under an anoxic-oxic transition imposed by bubbling N2 and air into the overlying water. The results showed that the IPI sampler is a powerful and robust technique for monitoring the dynamics of element profiles in soil porewater at high (mm) resolution.
Moreover, ICP-MS and IC-ICP-MS methods were optimized to increase the throughput of multi-element measurements on the limited sample volumes (μL level) collected by high-resolution porewater samplers (e.g. IPI samplers). Major elements (e.g. iron (Fe) and manganese (Mn), mg·L-1 level) were measured by ICP-MS in extended dynamic range mode to avoid signal overflow, while trace elements (e.g. As, μg·L-1 level) were measured in dynamic reaction cell (O2) mode to alleviate potential polyatomic interferences. An ammonium bicarbonate mobile phase was further demonstrated to simultaneously measure common species of As, phosphorus (P) and sulfur (S) by IC-ICP-MS. With the optimized analytical methods and IPI samplers, the throughput of multi-element and speciation measurements was improved up to 10-fold compared with traditional methods. Furthermore, the cycling of As across the SWI and rhizosphere was studied with the updated IPI sampler and state-of-the-art analytical techniques. At the SWI, profiles of As, Fe and other associated elements were mapped in five paddy soils. The results showed a close coupling of Fe, Mn, As and P in four of the five soils; however, decoupling of Fe, Mn and As was observed in the oxic-anoxic transition zone of one soil, providing in situ evidence that As may decouple from Fe and Mn in this zone of the SWI. For the rhizosphere, dynamic profiles of Fe and As were mapped by IPI samplers from 0 to 40 days after transplanting. The results showed that Fe and As change spatiotemporally in the rhizosphere. Interestingly, Fe oxides formed in the rhizospheric soil, rather than on the rice root (Fe plaque), played the key role in immobilizing mobile As from the bulk soil. A model of As transport from soil to rice, linking the temporal and spatial regulation of As in paddy soils, is provided to help better understand As cycling in paddy soils.

83. Extensible and Explainable Lifelong Machine Learning Architecture for Double-Track Fine-Grained Sentiment Analysis

Author: Xianbin Hong, 2022
Abstract: Lifelong machine learning aims to accumulate knowledge over a lifetime and use that knowledge to solve various tasks. It retains knowledge while solving problems in order to improve its ability to handle new tasks: past knowledge contributes to solving new tasks, and old tasks also benefit from new knowledge. Lifelong machine learning is a broad vision for artificial intelligence rather than a specific algorithm; any method can be part of lifelong learning if it contributes knowledge to other tasks or leverages outside knowledge to tackle a new one. Many well-known learning paradigms, such as transfer learning and learning to learn, are components of lifelong learning. Although researchers have worked on lifelong learning for decades, many gaps remain. To advance the field, this thesis addresses scalability and knowledge validation. To give readers a direct understanding of lifelong learning, the author uses fine-grained sentiment analysis as a running example. Chapter 1 introduces the motivation and research questions in detail. Chapter 2 then reviews the history of lifelong learning and its definition, and raises some open issues. Chapter 3 introduces a deep neural network-based lifelong learning approach for Amazon product review sentiment classification. Compared with a non-lifelong deep learning method, the lifelong learning approach improves the F1 score of sentiment classification on the negative class from 67.78% to 78.84%. Its time complexity is reduced to O(n), a significant improvement over the O(n²) complexity of previous research. This chapter also proposes leveraging knowledge distillation to reduce the model size for real-time tasks. The lifelong learning architecture shows good scalability and can handle 10,000 tasks in real time at an affordable cost.
The architecture also incorporates knowledge validation to achieve better performance. This thesis categorizes knowledge in lifelong learning into two forms: implicit and explicit. Implicit knowledge is knowledge that humans cannot directly understand, such as the parameters of the deep neural networks in Chapter 3. People prefer explicit knowledge because it is more explainable, so Chapter 4 uses fine-grained sentiment analysis as an example to discuss how to obtain and maintain explicit knowledge for lifelong learning. Fine-grained sentiment analysis requires knowledge of product features and people's attitudes toward those features, and is much more complex than the sentiment classification task in Chapter 3. Lacking such knowledge, the deep learning approach achieves only 47% classification accuracy on the Twitter product review test dataset. To solve this, the author uses entity recognition to detect product features and reinforcement learning to learn people's attitudes toward each feature. The reinforcement learning approach can also monitor changes in knowledge and evaluate its reliability. This explicit knowledge-based approach can explain why the model makes a prediction and whether the decision is reliable. It can provide consumers with a statistical report showing how people feel about each feature; different customers have different demands, so they need to know whether each feature of a product satisfies their demands before purchase. Knowledge validation also tells researchers whether the knowledge is reliable enough to use. With the help of explicit knowledge, the classification accuracy on the Twitter product review test dataset rises from 47% to 72%, a significant improvement. As explicit knowledge has better explainability, the author aims to use explicit knowledge as much as possible.
However, collecting explicit knowledge takes time, so implicit knowledge remains necessary and valuable in practice. On the path of lifelong learning, we need both implicit and explicit knowledge at the same time; hence it is a double-track approach.
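The knowledge distillation mentioned in the Chapter 3 summary can be illustrated with a minimal sketch. This is the generic distillation objective (soft teacher targets blended with hard labels), not the thesis's actual architecture; the temperature `T` and mixing weight `alpha` are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Blend of the soft-target KL loss (teacher -> student) and the
    hard-label cross-entropy -- the usual knowledge-distillation objective."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T  # KL term, scaled by T^2
    hard = -np.log(softmax(student_logits)[hard_label])       # cross-entropy term
    return alpha * soft + (1 - alpha) * hard
```

Training a small student against this loss is what lets a compact model retain most of the large model's task knowledge at a fraction of the inference cost.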

84. The Work Experience and Practice of the Crowdsourcing Workforce in China

Author: Yihong Wang, 2022
Abstract: Crowdsourcing has become an international phenomenon attracting businesses and a crowd workforce across the globe. China, one of the world's most populous countries, has a rapidly growing digital economy that now supplies a substantial workforce to crowdsourcing platforms. However, not only is there limited research on the work experiences and practices of Chinese crowdworkers, but existing studies generally overlook issues pertaining to an emerging type of crowd workforce known as the "crowdfarm": organizations that take on and carry out crowdwork as part of their formal business. The lack of understanding of this digital workforce has been identified as an obstacle to the development and application of crowdsourcing as a disruptive value creation model built on human intelligence. Considerable potential therefore exists in the Chinese crowdsourcing context for HCI and CSCW studies to help alleviate this issue. This thesis explores the job demands, resources, crowdwork experiences and platform commitment of general Chinese crowdworkers; compares the work experiences of crowdfarm workers and solo crowdworkers; and examines the work practices of crowdfarms as well as their interplay with solo crowdworkers, requestors, and crowdsourcing platforms. First, based on a framework of well-established instruments, namely the Job Demands-Resources model, the Work Design Questionnaire, the Oldenburg Burnout Inventory, the Utrecht Work Engagement Scale, and the Organizational Commitment Questionnaire, we systematically study the work experiences of 289 crowdworkers who work for the most popular Chinese crowdsourcing platform. Our study examines these experiences along four dimensions: (1) crowdsourcing job demands, (2) job resources available to the workers, (3) crowdwork experiences, and (4) platform commitment.
Our results indicate significant differences across the four dimensions based on crowdworkers' gender, education, income, job nature, and health condition. They further illustrate that different crowdworkers have different needs and thresholds of demands and resources, and that this plays a significant role in moderating crowdwork experience and platform commitment. Overall, this part of the work sheds light on the work experiences of general Chinese crowdworkers and contributes to broader understandings of crowdworker experience. Next, drawing on a study involving 48 participants, our research explores, compares and contrasts the work experiences of solo crowdworkers with those of crowdfarm workers. Our findings illustrate that the work experiences and contexts of solo workers and crowdfarm workers differ substantially across all seven aspects investigated: (1) work environment, (2) tasks, (3) motivation and attitudes, (4) rewards, (5) reputation, (6) crowdwork satisfaction, and (7) work/life balance. This part of the work furthers the understanding of the work experiences of two different types of crowdworkers in China. Finally, we extended our study of typical solo crowdworker practices to include crowdfarms, reporting on interviews with people who work in 53 crowdfarms on the ZBJ platform. We describe how crowdfarms procure jobs, carry out macrotasks and microtasks, manage their reputation, and employ different management practices to motivate crowdworkers and customers. The results also reveal the crowdfarms' interplay with solo crowdworkers, requestors and crowdsourcing platforms. Overall, this work provides one of the first systematic investigations of the work experience and practice of digital laborers in the Chinese crowdsourcing context, addressing relevant gaps in the current literature.
At the same time, by identifying and studying an emerging crowdsourcing workforce - crowdfarm - in the changing landscape of crowdsourcing in China, our work also provides a new direction and topic for researchers in the field of HCI/CSCW. We hope our work stimulates others to join in research and discussion of the potential impact of such evolution on the gig economy and the well-being of the tens of millions of people now engaged in crowdsourced work in a broader context.

85. High Efficiency Antenna Designs for Wearable Applications

Author: Rui Pei, 2021
Abstract: Wearable antennas have attracted increasing attention with the growing popularity of wearable electronics over the last decade. These antennas, situated on the human body, in clothing or on daily accessories, help form the wireless channel required in a Wireless Body Area Network (WBAN). Wearable antenna designs face a series of challenges due to their working environment: frequency shifting, efficiency degradation and radiation pattern distortion are induced by human body tissue. More importantly, a degree of radiation shielding toward the body must be provided to meet the Specific Absorption Rate (SAR) requirement. The aim of this study is to design antennas suitable for on-body applications over long periods of time. The proposed antennas should (1) be conformal and ergonomic to avoid discomfort; (2) minimize radiation into the human body for safety while ensuring high on-body radiation efficiency; and (3) use reasonable materials and manufacturing processes to limit overall cost while maintaining robustness. To achieve these properties, the belt buckle was chosen as the platform for the proposed antenna designs: its rigid metallic nature makes the designs efficient and robust. In this thesis, two types of novel belt antenna are presented, the first based on a pin-buckle design and the second on a single-tongue buckle. An in-house reverberation chamber was designed and installed to accurately measure the on-body radiation efficiency of the proposed antennas. Textile electromagnetic bandgap materials were studied and applied to the second belt antenna, raising its on-body radiation efficiency from around 40% to over 70%.

86. The Economic Effects of Infrastructure on the Prefecture Level in China: Evidence from Historic and Modern Data

Author: Zhe Yuan, 2022
Abstract: This dissertation comprises three essays that examine the economic effects of three different infrastructure types. Across the essays, the author aims to identify the initial incentives for decision makers to build infrastructure. The first essay uses a fixed effects model to examine the effects of the Grand Canal and major waterways on wheat market integration in the mid-Qing period. Applying the methodology of Donaldson (2018), it demonstrates that wheat prices in cities along all waterways, including the Grand Canal, responded weakly to local weather conditions and strongly to price fluctuations in neighbouring cities. The second essay applies quantitative methods to investigate the transport efficiency and economic efficiency of urban rail transit (URT) systems across Chinese cities. Data Envelopment Analysis (DEA) is employed to generate production frontiers for economic and transport outcomes, one producing transportation turnover and the other serving economic objectives. After deriving the economic and transport efficiencies, the essay uses Tobit regressions to estimate the factors affecting efficiency. The analysis demonstrates that URT infrastructure is more efficient at transporting passengers in China's first-tier cities, but does a better job of improving GDP and economic attractiveness in other cities. The evidence thus suggests, ex post, that the primary goal of building a URT may differ between policy makers in cities of different sizes. The third essay estimates the effects of opening new airports on employment in 19 sectors using prefecture-level data from 2003 to 2018. Using a difference-in-differences (DID) specification, it finds that airport openings mainly brought significant growth in two sectors, wholesale & retail and transport & warehousing, across the whole prefecture region. No significant effects were found in other sectors or in total employment.
These findings can be attributed to each sector's heterogeneous dependence on air traffic. To address endogeneity, an instrumental variable based on the distance to the nearest hub airport and the location of military airports is constructed. The two-stage least squares (2SLS) regression with this instrumental variable (IV) suggests that the baseline model underestimates the significant effects on the wholesale & retail and transport & warehousing sectors.
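The DID logic described above reduces to a canonical two-by-two comparison: the change in the treated group minus the change in the control group. The sketch below is that textbook estimator only, not the essay's actual regression (which uses panel data with fixed effects and an IV); the variable names are illustrative:

```python
import numpy as np

def did_estimate(y, treated, post):
    """Canonical 2x2 difference-in-differences from group means:
    (treated post - treated pre) - (control post - control pre).
    Under the parallel-trends assumption this identifies the
    average treatment effect on the treated."""
    y, treated, post = map(np.asarray, (y, treated, post))
    mean = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (mean(1, 1) - mean(1, 0)) - (mean(0, 1) - mean(0, 0))
```

For example, if control outcomes move 10 → 12 while treated outcomes move 20 → 25, the common trend of +2 is differenced out and the estimated effect is +3.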

87. Mapping the catalytic active site of enoyl reductase from Mycobacterium tuberculosis polyketide synthase 5

Author: Yanni Xue, 2022
Abstract: Tuberculosis, caused by the bacterium Mycobacterium tuberculosis (Mtb), is one of the leading causes of mortality worldwide. With the surge of multidrug-resistant Mtb strains, tuberculosis remains a huge global threat; the identification of potential drug targets and the development of new treatments therefore need immediate attention. One of the strategies underlying Mtb resilience is its capacity to build a thick and waxy cell wall that protects it from antimicrobial attack. The abundant lipids in the mycobacterial cell envelope are attractive in drug development owing to their vital role in maintaining the structural integrity and pathogenicity of the bacterium. Polyketide synthases (PKSs) participate in the assembly of various polyketide products, and the enoyl reductase (ER) domain is among the least structurally and biochemically characterized domains in PKSs. In this PhD work, we determined the first X-ray structure of Mtb polyketide synthase 5 enoyl reductase (PKS5-ER) in its apo form at 2.7 Å resolution. The structure displays a homodimeric arrangement around a central two-fold axis, built around a beta-sheet shared by the two molecules of the dimer. The binding site of the cofactor nicotinamide adenine dinucleotide phosphate (NADPH) is unoccupied and displays signs of structural flexibility: the side chain of F42 is flipped 90°, sterically hindering the entrance of the cofactor. Moreover, we established the first in vitro biochemical studies of an ER domain of the Mtb PKS family, showing that PKS5-ER is capable of reducing the enoyl double bond of butenyl-CoA and crotonyl-CoA.
Although the results could not confirm the expected ordered sequential mechanism, the following reaction model of PKS5-ER is proposed: the 2-enoyl intermediate generated by the preceding enzymatic domain (PKS5-KR) waits for free NADPH to approach and be positioned for hydride transfer before the intermediate is reduced. Additionally, biophysical screening suggested that G152 and G154 are pivotal residues for PKS5-ER in the dynamics between different protein conformations. Kinetic characterization of the protein variants F42A, T127A, H147A, S148A, G151A, G152A, G154A, R177A, R193A, K240A, L263A, D264A and H317A showed them to be catalytically impaired, with progress curves that could hardly be fitted to a Michaelis-Menten curve. It can be concluded that these residues are involved in the catalytic mechanism of PKS5-ER to some extent, though they more likely play a structural rather than a chemical role. Among them, the highly conserved GGVGMA NADPH cofactor binding motif, especially residues G152 and G154, plays a vital role in the catalytic activity of PKS5-ER, as their mutation produced the strongest suppression of activity in terms of both catalytic efficiency and binding strength. Small molecule screening identified 2-phenylhydroquinone (PHQ) and hydroquinone (HQ), which displayed inhibition of PKS5-ER substrate catalysis with IC50 values of 97.6 μM and 611.4 mM, respectively, suggesting that quinone derivatives may act as competitive inhibitors of PKS5-ER with respect to butenyl-CoA. Docking of PHQ and HQ to PKS5-ER gave estimated free energies of -5.86 kcal mol-1 and -4.25 kcal mol-1, respectively, indicating strong binding between PKS5-ER and these two molecules and confirming that the larger molecule showed the stronger inhibition.
The docking structures also supported the hypothesis that the binding pocket of quinone derivatives in PKS5-ER is close to NADPH, and in the case of PHQ, to the nicotinamide moiety of NADPH. This PhD work provides insights into the catalytic active site of Mtb PKS5-ER and lays the foundation for the future discovery of small molecules with inhibitory capacity that could possibly be translated into therapeutic reagents.
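The Michaelis-Menten fitting and competitive inhibition discussed above follow standard enzyme kinetics. A minimal sketch of the textbook rate laws (generic forms with illustrative parameters, not values measured for PKS5-ER):

```python
def mm_rate(s, vmax, km):
    # Michaelis-Menten initial rate: v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

def mm_rate_competitive(s, vmax, km, i, ki):
    # A competitive inhibitor raises the apparent Km by (1 + [I]/Ki)
    # while leaving Vmax unchanged -- the behaviour expected of
    # inhibitors that compete with the substrate for the active site.
    return vmax * s / (km * (1.0 + i / ki) + s)

def ic50_competitive(km, s, ki):
    # Cheng-Prusoff relation for a competitive inhibitor:
    # IC50 = Ki * (1 + [S]/Km), so IC50 grows with substrate level.
    return ki * (1.0 + s / km)
```

The Cheng-Prusoff form makes explicit why a reported IC50 for a competitive inhibitor is only meaningful alongside the substrate concentration used in the assay.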

88. Vision-based Driver Behaviour Analysis

Author: Chao Yan, 2016
Abstract: With ever-growing traffic density, the number of road accidents is anticipated to increase further. Finding solutions to reduce road accidents and improve traffic safety has become a top priority for many government agencies and automobile manufacturers alike. It has become imperative to develop Advanced Driver Assistance Systems (ADAS) able to continuously monitor not just the surrounding environment and vehicle state, but also driver behaviour. Dangerous driver behaviour, including distraction and fatigue, has long been recognized as a main contributing factor in traffic accidents. This thesis presents contributing research on vision-based driver distraction and fatigue analysis and pedestrian gait identification, summarised in four parts as follows. First, driver distraction activities, including operating the shift lever, talking on a cell phone, eating, and smoking, are recognised under the framework of human action recognition. Computer vision techniques, including the motion history image and the pyramid histogram of oriented gradients, are applied to extract discriminative features for recognition. Moreover, a hierarchical classification system that considers different sets of features at different levels is designed, improving performance over conventional "flat" classification. Second, to address effectiveness under poor illumination and realistic road conditions and to improve performance, a posture-based driver distraction recognition system is developed that applies a convolutional neural network (CNN) to automatically learn and predict pre-defined driving postures. The main idea is to monitor driver arm patterns, extracting discriminative information to predict distracting driver postures.
Third, to analyse driver fatigue and distraction through the driver's eyes, mouth and ears, a commercial deep learning facial landmark localisation toolbox (the Face++ Research Toolkit) is evaluated for locating the regions of the driver's eyes, mouth and ears, and demonstrates robust performance under illumination variation and occlusion in real driving conditions. Semantic features for recognising the different states of the eyes, mouth and ears on image patches are then learned via CNNs, requiring minimal domain knowledge of the problem.
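The motion history image used in the first part can be sketched as a simple per-frame update: moving pixels are stamped with the current (maximum) intensity while static pixels fade, so recency is encoded as brightness. This is the standard MHI recurrence; `tau` and `delta` are illustrative values, not those used in the thesis:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255.0, delta=32.0):
    """One motion history image (MHI) step: pixels flagged as moving are
    set to the maximum intensity tau, while static pixels decay by delta
    toward zero. The resulting image summarises where and how recently
    motion occurred, and is a compact input for posture/action features."""
    return np.where(motion_mask, tau, np.maximum(mhi - delta, 0.0))
```

Iterating this over a frame sequence and then computing gradient-based descriptors (e.g. histograms of oriented gradients) on the MHI is a common recipe for action recognition.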

89. Development of Low Cost CdS/CdTe Thin Film Solar Cells by Using Novel Materials

Author: Jingjin Wu, 2016
Abstract: Cadmium telluride (CdTe) thin film solar cells are one of the most promising solar cell technologies, hold a 5% share of the photovoltaics market, and are expected to play a crucial role in its future. The limitations on terawatt-scale deployment of CdTe solar cells are the scarcity of raw materials, low power conversion efficiency, and stability. Over the last few decades, intensive studies have sought to further understand the material properties, explore substitute materials, and gain insight into defect generation and distribution in solar cells; yet these problems are still not fully resolved. One significant topic is the replacement of indium tin oxide (ITO). Following the introduction of aluminium-doped zinc oxide (ZnO:Al or AZO) into thin film solar cell applications, zinc oxide based transparent conducting oxides have attracted attention from academic research institutes and industry. Zinc oxide is commonly doped with group III elements such as aluminium and gallium; some researchers have introduced group IV elements, including titanium, hafnium and zirconium, and obtained good properties. In our work, we deposited zirconium-doped zinc oxide (ZnO:Zr or ZrZO) by atomic layer deposition (ALD). Thanks to ALD's precise control of the chemical ratio, the nature of ZrZO could be revealed: the ZrZO thin film has good thermal stability, and with increasing zirconium concentration the energy bandgap of the ZrZO film follows the Burstein-Moss effect. Another issue for CdTe solar cells is the doping of CdTe thin films: low carrier concentration in CdTe thin films limits the open-circuit voltage and thus the power conversion efficiency. Copper is a compelling element used as a CdTe dopant; however, a high concentration of copper ions results in severe solar cell degradation. One approach was to evaporate a few nanometres of copper onto the CdTe thin film followed by annealing.
Another approach was to introduce a buffer layer between the CdTe thin film and the back metallic electrode. Numerous works have shown that an Sb2Te3 layer performs better than copper-based buffer layers, while carbon-based buffer layers, such as graphene and single-wall carbon nanotubes, showed excellent stability.
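The Burstein-Moss effect mentioned above has a standard free-electron form: as doping fills the conduction band, the optical band gap widens with carrier concentration as n^(2/3). A sketch of that relation follows; the effective mass of 0.28 m_e is a commonly quoted literature value for ZnO, an assumption here rather than a value from the thesis:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def burstein_moss_shift_eV(n_cm3, m_eff=0.28):
    """Burstein-Moss widening of the optical band gap for a degenerate
    electron gas: dE = (hbar^2 / 2 m*) * (3 pi^2 n)^(2/3), returned in eV
    for a carrier concentration given in cm^-3."""
    n = n_cm3 * 1e6  # convert cm^-3 to m^-3
    dE = (HBAR ** 2 / (2.0 * m_eff * M_E)) * (3.0 * math.pi ** 2 * n) ** (2.0 / 3.0)
    return dE / EV
```

With these assumptions the shift at a degenerate carrier density of 1e20 cm^-3 comes out on the order of a few tenths of an eV, consistent with the bandgap widening typically reported for heavily doped ZnO films.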

90. Trading Rule and Market Quality: Simulations based on Agent-based Artificial Stock Markets

Author: Xinhui Yang, 2021
Abstract: The stock market is one of the most important financial markets in a country. In recent decades, many financial markets have changed their trading rules to achieve higher market quality (e.g. market liquidity, market volatility and price efficiency). This thesis focuses on three important trading rules, tick size, secondary priority rule and price limit, and tests their influence on market quality using agent-based artificial stock markets (ASMs), which are agent-based, order-driven simulated stock market models. Unlike empirical market data, ASMs ensure that the trading rule is the only exogenous variable varying between experiments. Given the lack of a consensus method for determining the fundamental stock price in real stock markets, previous empirical studies have generally focused on market liquidity and volatility. However, as the fundamental price can be set in ASMs, price efficiency can be analysed in addition to liquidity and volatility. Market quality is therefore investigated here from a more comprehensive perspective covering liquidity, volatility and price efficiency. Tick size, the minimum change in stock price, is the first trading rule investigated. Two types of tick size system are examined: uniform and stepwise. Under the uniform system, the tick size is the same for all stocks in the market. Testing market quality with tick sizes of 1, 0.1, 0.01 and 0.001, the results show that a smaller tick size can improve market quality, while an extremely small tick size damages it. The price stepwise system, where tick size increases with price, and the volume stepwise system, where tick size increases with decreasing trade volume, are then investigated. The results indicate that both stepwise systems can promote market quality, in different ways.
These results are expected, as the price stepwise system is mainly designed to limit noise in markets, while the volume stepwise system balances the benefits for liquidity suppliers and demanders. Building on the performance of the price stepwise and volume stepwise systems, a combination stepwise tick size system is designed and investigated to test whether it combines the advantages of the two and further improves market quality. A combination stepwise system was proposed and supported by Goldstein and Kavajecz (2000), but has not been adopted in real stock markets. The tick size in a maximal or minimal combination system is determined by the larger or smaller, respectively, of the tick sizes in the price stepwise and volume stepwise systems. Consistent with expectations, the results indicate that a combination system, especially the minimal combination system, can further promote market quality. The secondary priority rule, which determines how quoted orders in the market are matched, is the second trading rule investigated. The impact of various secondary priority rules, including the time priority, pro-rata priority and equal sharing priority rules, on market quality is investigated, with consideration given to investors' strategies under each rule. The time priority rule (first come, first served) is the most common secondary priority rule, and almost all stock markets use it. The pro-rata and equal sharing priority rules are generally used in other financial markets, such as futures markets. The pro-rata priority rule allocates market orders to limit orders at the best price in proportion to limit order sizes, while the equal sharing priority rule allocates market orders equally.
Since 2017, the New York Stock Exchange has used the ‘parity’ priority rule, a combination of the time and pro-rata priority rules, which indicates that some stock markets may have recognised the importance of the secondary priority rule for market quality and have tried to identify a more effective rule than time priority. Taking market quality under the time priority rule as the benchmark, the results show that the pro-rata priority rule can enhance trading activity and price efficiency, but can also increase volatility; the equal sharing priority rule may damage market quality with respect to market liquidity, market volatility and price efficiency. Price limit—setting an established amount by which a price may rise or fall in any single trading period—is the third trading rule investigated in the thesis. In financial markets with a price limit, trades are prevented from occurring outside specified price bands. Previous empirical studies have shown that lower limit hits are followed by price reversals, low volatility and lower or stable trade volume, while upper limit hits are followed by price continuations, high volatility and higher trade volume (e.g. Kim et al., 2013; Li et al., 2014). This provides evidence that the price limit is beneficial when the lower limit is hit, but harmful when the upper limit is hit. Therefore, a new policy with a lower price limit but no upper price limit (termed the asymmetric limit policy) is proposed here. Market quality under the asymmetric limit policy is tested and compared with that of a market adopting the symmetric limit policy (with both lower and upper limits) and a market without limits. The experimental results verify the hypothesis that the asymmetric limit policy can significantly promote market quality. 
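A minimal sketch of the three limit policies compared above, assuming the band is a fixed percentage of a reference price (the 10% figure is a placeholder, not the thesis's parameter):

```python
def clip_to_limits(price, reference, pct=0.10, lower=True, upper=True):
    """Constrain a trade price to the allowed band around a reference price.
    Symmetric policy: lower=True, upper=True.
    Asymmetric policy (the thesis's proposal): lower=True, upper=False.
    No-limit market: lower=False, upper=False."""
    lo = reference * (1 - pct)
    hi = reference * (1 + pct)
    if lower and price < lo:
        return lo   # trade cannot occur below the lower band
    if upper and price > hi:
        return hi   # trade cannot occur above the upper band
    return price
```

The asymmetric variant keeps the stabilising lower band while leaving upward price discovery unconstrained.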
The reference price, which is the real-time price used to determine the price band under the price limit policy, is another focus of this study. It is found that, compared with the quoted price, the traded price is more suitable as the reference price under both asymmetric and symmetric limit policies. This finding suggests that an asymmetric price limit with the traded price as the reference price might be a feasible policy for stock markets to use to promote market quality. This thesis examines the effects of changes in tick size, secondary priority rule and price limit policy on market quality, including market liquidity, market volatility and price efficiency. The results indicate the effectiveness of the minimal combination tick size system, the pro-rata secondary priority rule and the asymmetric price limit for promoting market quality, which has important theoretical and management implications for stock markets. Moreover, by investigating trading rules that are still at the theoretical stage, this study shows that ASMs are an important complement to empirical studies.

91.Realization of normally-off GaN HEMTs for high voltage and low resistance applications

Author:Yutao Cai 2021
Abstract:With the development of power electronics, replacing silicon with a more capable material has become necessary in the field of high-power applications. GaN-based devices are attractive for high-power switching applications owing to their high breakdown electric field, high carrier mobility, and fast switching speed. However, the realization of normally-off GaN-based devices for high-voltage, low-resistance applications has not been fully accomplished. In this thesis, the simulation, fabrication, and characterization of AlGaN/GaN MIS-HEMTs with improved high-power properties are carried out. TCAD simulation was first used to understand the effect of gate dielectric parameters and Al2O3/GaN interface states on the C–V behaviour of AlGaN/GaN MIS-capacitors. After that, an economical and effective method—a 1-octadecanethiol (ODT) treatment of the GaN surface prior to Al2O3 gate dielectric deposition—was proposed to improve the Al2O3/GaN interface quality. GaN-based metal–insulator–semiconductor devices treated with HCl, O2 plasma, and ODT have been demonstrated. The ODT treatment is found to suppress the native oxide and effectively passivate the GaN surface, so the interface quality of the device is considerably improved. The Al2O3/GaN interface trap density for devices with the ODT treatment has been calculated to be around 3.0×10^12 cm^-2 eV^-1, which is among the lower values reported for Al2O3 gate dielectrics in GaN-based MIS devices. Moreover, the gate control characteristics of MIS-HEMTs fabricated with the ODT treatment are also improved. In addition, a simulation of the off-state breakdown voltage and electric-field profiles in the MIS-HEMTs as functions of the device structure was carried out. 
To improve the high-voltage performance of the devices, AlGaN/GaN MIS-HEMTs with SiNx single-layer passivation, Al2O3/SiNx bilayer passivation, and ZrO2/SiNx bilayer passivation are investigated. High-k dielectrics are adopted as the passivation layer on MIS-HEMTs to suppress shallow traps on the GaN surface. The high-k-passivated MIS-HEMTs also show improved breakdown characteristics, which is explained by 2-D simulation analysis. The fabricated devices with high-k/SiNx bilayer passivation exhibit better power performance than devices with plasma-enhanced chemical vapour deposition SiNx single-layer passivation, including lower leakage currents, smaller current collapse, and higher breakdown voltage. The Al2O3/SiNx-passivated MIS-HEMTs exhibit a breakdown voltage of 1092 V, and the dynamic Ron is only 1.14 times the static Ron after an off-state VDS stress of 150 V; the ZrO2/SiNx-passivated MIS-HEMTs exhibit a higher breakdown voltage of 1203 V, with a dynamic Ron of 1.25 times the static Ron after the same stress. Furthermore, to realize GaN-based devices with normally-off operation, an AlGaN/GaN MIS-FET with a fully recessed gate structure was first investigated. These devices exhibited a large on-state resistance, which is undesirable for high-power applications. A novel normally-off AlGaN/GaN MIS-HEMT structure with a ZrOx charge trapping layer is therefore proposed, in which the ZrOx charge trapping layer is deposited on the partially recessed AlGaN in conjunction with an Al2O3 gate dielectric. The fabricated MIS-HEMTs presented a threshold voltage of +1.51 V and a maximum drain current density of 779 mA/mm, accompanied by a low on-resistance of 7 Ω·mm. 
Moreover, after an off-state VDS,Q stress of 200 V, the dynamic on-resistance degraded by a factor of only 1.5, indicating a satisfactory interface between ZrOx and GaN. The devices also exhibit a high breakdown voltage of 1447 V. Although further improvement of the charge storage stability is needed, the results indicate the significant potential of an ALD-ZrOx charge trapping layer for realizing normally-off GaN-based devices for high-power applications.

92.The impact of margin-trading and short-selling reform on liquidity: Evidence from the Chinese stock market

Author:Shengjie Zhou 2021
Abstract:Margin-trading and short-selling activities in the Chinese stock market are unique in that only a subset of stocks is eligible for margin trading and short selling, and the list of eligible stocks changes over time. In addition, daily data on margin-trading and short-selling activities are available for each individual stock. Taking advantage of this market design and using daily data from March 2010 to the end of 2016, I first show that stocks' eligibility for margin trading and short selling contributes to improvements in stock liquidity as measured by the effective spread and Amihud's (2002) illiquidity ratio. Secondly, differentiating the impacts of margin trading and short selling, I find that margin trading enhances liquidity while short selling impairs it. In addition, I show that the detrimental effect of short selling on liquidity arises because it increases the adverse selection risk of the relevant stocks. The results suggest that short sellers are informed traders, as short selling has predictive power for returns. Moreover, short selling in stocks with the highest levels of information asymmetry tends to have the strongest negative impact on stock liquidity. Thirdly, I demonstrate the asymmetric impacts of margin trading and short selling under different market conditions. In poor market conditions, stocks eligible for margin trading and short selling tend to have lower rather than higher liquidity; furthermore, margin-trading activity hinders liquidity while short selling improves it. Hence, the impacts of margin trading and short selling on liquidity reverse during market downturns. This finding helps to reconcile the discrepancy between many findings in the literature and regulators' policy of banning short selling during crisis periods. 
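The Amihud (2002) illiquidity ratio mentioned above is the average of daily absolute return divided by trading volume in currency terms; a minimal sketch (the scaling constant is a common convention, not necessarily the one used in the thesis):

```python
def amihud_illiquidity(returns, dollar_volumes, scale=1e6):
    """Amihud (2002) illiquidity: mean of |r_t| / volume_t, scaled for readability.
    Higher values mean a given amount of trading moves the price more,
    i.e. the stock is less liquid."""
    ratios = [abs(r) / v for r, v in zip(returns, dollar_volumes) if v > 0]
    return scale * sum(ratios) / len(ratios)
```

Days with zero volume are skipped, since the ratio is undefined there.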
I also examine the impacts of margin trading and short selling on the lead–lag relations in liquidity and return between stocks eligible for margin trading and short selling and other stocks. First, applying Vector Autoregression (VAR) models to minute-level data, I find a strong lead–lag relation in both liquidity and return between eligible and ineligible stocks: liquidity and returns of eligible stocks lead those of ineligible stocks. This lead–lag effect persists under different market conditions. In addition, the lead–lag effect in liquidity is stronger when investors face constrained funding liquidity, which supports the theoretical model of Brunnermeier and Pedersen (2009) on the interaction between funding liquidity and stock liquidity. Second, only margin trading has a significant impact on the lead–lag relations. To explain why margin trading affects the lead–lag effects, I propose three possible mechanisms (the deleverage channel, cross-asset learning channel, and information diffusion channel) and use mediation analysis to test the importance of each. I find that the deleverage channel accounts for 58.24% (70.73%) of the impact of margin trading on the lead–lag effect in liquidity (return), the information diffusion channel explains only 2.28% (0.86%), and the cross-asset learning channel explains 39.58% (28.41%). This study provides the first empirical evidence in the literature of a lead–lag relation in liquidity, and it is the first to demonstrate the existence of a return lead–lag relation at the intraday level. Finally, it highlights the role margin trading plays in forming such lead–lag relations in both liquidity and return.

93.Learning Density Models via Structured Latent Variables

Author:Xi Yang 2018
Abstract:As a principal approach to machine learning and cognitive science, the probabilistic framework has been continuously developed both theoretically and practically. Learning a probabilistic model can be thought of as inferring plausible models to explain observed data. The learning process exploits random variables as building blocks held together by probabilistic relationships. The key idea behind latent variable models is to introduce latent variables as powerful attributes to reveal data structures and explore underlying features that can sensitively describe real-world data. Classical research approaches employ shallow architectures, including latent feature models and finite mixtures of latent variable models. Within the classical frameworks, certain assumptions must be made about the form, structure, and distribution of the data. Since the shallow form may not describe the data structures sufficiently, new types of latent structures have been developed within probabilistic frameworks. Along this line, three main research interests have been sparked: infinite latent feature models, mixtures of mixture models, and deep models. This dissertation summarises work advancing the state of the art in both classical and emerging areas. In the first block, a finite latent variable model with parametric priors is presented for clustering and is further extended into a two-layer mixture model for discrimination. These models embed dimensionality reduction in their learning tasks by designing a latent structure called the common loading. Referred to as joint learning models, they attain a more appropriate low-dimensional space that better matches the learning task, with the parameters of the low-dimensional space and the model optimised simultaneously. 
However, these joint learning models must assume a fixed number of features and mixtures, which are normally tuned and searched by trial and error. In general, simpler inference can be performed by fixing more parameters, but fixed parameters limit the flexibility of models, and false assumptions can even lead to incorrect inferences from the data. A richer model is therefore needed to reduce the number of assumptions, so an infinite tri-factorisation structure with non-parametric priors is proposed in the second block. This model can automatically determine an optimal number of features and leverage the interrelation between data and features. The final block introduces how to promote shallow latent structures to deep structures in order to handle richer structured data. This part includes two tasks: a layer-wise model and a deep autoencoder-based model. In a deep density model, the knowledge of cognitive agents can be modelled using more complex probability distributions, while inference and parameter computation remain straightforward through a greedy layer-wise algorithm. The deep autoencoder-based joint learning model is trained end-to-end, does not require pre-training of the autoencoder network, and can be optimised by standard backpropagation without maximum a posteriori inference. Deep generative models are much more efficient than their shallow counterparts for unsupervised and supervised density learning tasks, and they can also be developed for various practical applications.  

94.Upstream network actors' operational capabilities for servitization through service offshoring: Impact on the performance of manufacturers' service offshoring contracts

Author:Zhuang Ma 2020
Abstract:Drawing on the operational capabilities perspective, this thesis investigates how upstream network actors (manufacturers' service delivery centres and local service specialists) contribute to manufacturers' operational capabilities through captive offshoring and offshore outsourcing contracts, and how these capabilities influence manufacturers' service offshoring performance. To address this aim, the thesis adopts a mixed-methods research design integrating qualitative and quantitative examinations. The qualitative study comprises 26 semi-structured interviews with senior managers in service offshoring companies to explore and identify the operational capabilities contributed by manufacturers' offshore upstream network actors. Thematic analysis of the qualitative data identifies seven operational capabilities from manufacturers' captive offshoring and offshore outsourcing: 'process improvement' ('PI'), 'scalable service-enabling technology' ('SST'), 'scalable and well-trained service talents' ('SWS'), 'service and process innovation' ('SPI'), 'product/service customisation' ('PSC'), 'in-country relationship management' ('IRM') and 'security and IP protection protocols' ('SIP'). The subsequent quantitative study proposes seven hypotheses regarding the contributions of these capabilities to manufacturers' service offshoring performance, as well as the moderating effect of service offshoring modes on these relationships. Through a large-scale survey in five cities of the Yangtze River Delta region of China, the research collects 360 sets of responses from 1734 firms involved in manufacturers' service offshoring contracts. Hierarchical multiple regression analysis confirms that 1) all seven capabilities contribute to manufacturers' service offshoring performance and 2) service offshoring mode moderates only the relationships between three of the capabilities ('SST', 'SWS' and 'SPI') and performance. 
This thesis makes four major theoretical contributions. First, it focuses on manufacturers' offshore upstream network and discusses the uniqueness of the identified operational capabilities, which complement the downstream capabilities in the servitization literature. Second, it evaluates the importance of operational capabilities to manufacturers' service offshoring contracts. Third, it provides an alternative perspective (other than transaction costs) to explain manufacturers' service offshoring choices, given that 'SST' is more important for captive offshoring (Mode 1), while 'SWS' and 'SPI' are more important for offshore outsourcing (Mode 2). Fourth, the qualitative stage identifies in-country outsourcing as a new mode of offshoring (Mode 3), which updates our understanding of manufacturers' service offshoring arrangements and suggests further investigation. The thesis also provides important practical implications. First, servitizing manufacturers should consider the transferability of specific operational capabilities when choosing service offshoring modes. Second, service delivery centres should work with local service specialists to develop operational capabilities. Third, local service specialists should understand the capability requirements of manufacturers and service delivery centres and develop mutual trust with them. Fourth, local authorities should consider developing comprehensive infrastructure and a supportive environment to attract investors in the service offshoring sector. This study is subject to several limitations that suggest future research, such as developing objective measures for the performance of manufacturers' service offshoring contracts, considering both upstream and downstream network actors in manufacturers' servitization activities, and comparing onshore and offshore servitization.

95.Machine learning based trading strategies for the Chinese stock market

Author:Juan Du 2020
Abstract:This thesis focuses on machine learning based trading strategies for Chinese Exchange Traded Funds (ETFs). Machine learning and artificial intelligence (AI) provide an innovative level of service for financial forecasting, customer service and data security. Through the development of automated investment advisors powered by machine learning technology, financial institutions such as JPMorgan, Bank of America and Morgan Stanley have recently applied AI to investment forecasting. This thesis provides original insights into machine learning based trading strategies by producing trading signals based on forecasts of stock price movements. Theories and models associated with algorithmic trading, price forecasting and trading signal generation are considered, in particular machine learning models such as logistic regression, support vector machines, neural networks and ensemble learning methods. Each potentially profitable strategy for the China ETFs is tested, and the risk-adjusted returns of the corresponding strategies are analysed in detail. The primary aim of this thesis is to develop two machine learning based trading strategies in which machine learning models are used to predict trading signals. Each machine learning model, and their combinations, is first employed to generate trading signals from one-day-ahead forecasts, demonstrating that the resulting excess return does not cover transaction costs. This motivates reducing the number of unprofitable trades in the trading system by adopting multi-day forecasts in place of one-day-ahead forecasts. Investors thereby benefit from a longer prediction horizon, which provides more predicted information on the total number of upward (or downward) price movements, and can make trading decisions based on the majority of the predicted trading signals within the prediction horizon. Moreover, this style of trading rule is consistent with industry practice. 
The strategy is flexible enough to allow risk-averse and risk-loving investors to make different trading decisions. A multi-day forecast based trading system using random forests yields positive risk-adjusted returns after transaction costs, indicating that some machine learning techniques can successfully assist individuals in their decision-making activities.
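The multi-day rule described above can be sketched as a majority vote over the predicted daily signals within the horizon. This is a simplification of the thesis's procedure, assuming a simple-majority threshold:

```python
def multi_day_signal(predicted_moves):
    """predicted_moves: list of +1 (predicted up) / -1 (predicted down),
    one entry per day in the prediction horizon. Trade only when a clear
    majority of the forecasts agree; otherwise abstain."""
    score = sum(predicted_moves)
    if score > 0:
        return 1    # majority predicts upward moves: go long
    if score < 0:
        return -1   # majority predicts downward moves: exit / go short
    return 0        # no clear majority: no trade
```

Requiring agreement across several days filters out many single-day false signals, which is how the rule reduces the number of unprofitable trades.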

96.The long-term dynamical evolution of planetary-mass objects in star clusters

Author:Francesco Maria Flammini Dotti 2021
Abstract:The search for exoplanetary systems has seen tremendous progress in recent decades and has resulted in astounding discoveries. Since the discovery of the first confirmed exoplanet orbiting a main-sequence star in 1995, astronomers have attempted to measure and explain the characteristics of exoplanetary systems. Due to observational constraints, most of the discovered planetary systems were detected orbiting nearby field stars. To fully understand the formation and early evolution of planetary systems, it is necessary to study planetary systems in dense stellar environments, the birthplaces of stars. In these environments, gravitational interactions with neighbouring stars can substantially affect the architecture of planetary systems. Most stars, perhaps all stars in the Galaxy, formed in crowded environments. Most of these stellar aggregates dissolve within ten million years, while others remain gravitationally bound for millions to billions of years as the open clusters or globular clusters present in our Milky Way today. It is now commonly accepted that a large fraction of stars in our Milky Way host planetary companions. To backtrack the origin and dynamical evolution of exoplanets, it is necessary to carefully study the effects of the environments in which these planetary systems spent their youth, and of the Galactic field, open clusters, or globular clusters in which they may spend the remaining part of their lives. In this work we analyse how different environments affect the dynamical evolution of planetary systems and free-floating planets. We analyse the effect of the star cluster environment on the evolution of planetary systems by varying the initial stellar density of the star cluster, by studying the influence of an intermediate-mass black hole (IMBH) in the cluster centre, and by varying other star cluster properties (e.g., global rotation and virial ratio). 
We focus on the evolution of multi-planet systems, rogue planets (i.e., planets not gravitationally bound to a star) and single-planet systems with a protoplanetary disk. We find that the star cluster environment can have a significant influence on the dynamics of planetary systems. Generally, the disruption rate of planetary systems is higher (i) when the star cluster is denser, (ii) when encountering stars have speeds comparable to the orbital speed of the planets, (iii) when the encounter is more impulsive (i.e., smaller distances between encountering stars and planets) and (iv) for encountering stars on near-parabolic trajectories. Planet–planet scattering, induced by encounters with neighbouring stars, plays a dominant role in shaping the evolution and final architecture of a multi-planet system. Disruption of planetary systems occurs more frequently in the presence of an IMBH, notably during the early phases of star cluster evolution (before the cluster fills its Roche lobe). The presence of a central IMBH enhances the ejection rate of stars and free-floating planets from the star cluster, while global rotation in the star cluster reduces it.

97.An experimental and numerical study on the impact of wind-induced turbulence on gaseous dispersion in porous media

Author:Alireza Pourbakhtiar 2018
Abstract:This research focuses on how wind turbulence influences gas transport in porous media. This is useful for measuring the flux of greenhouse gases from the subsurface to the atmosphere, or the emission of hazardous gases such as radon into buildings, and it is relevant to any field in which gas is transported through porous media. A novel experimental arrangement is demonstrated for measuring wind-turbulence-induced gas transport in dry porous media under controlled conditions. This equipment was used to measure the effect of wind turbulence on gas transport (quantified as a dispersion coefficient) as a function of distance from the surface of the porous medium exposed to wind. Two different methods for measuring wind-induced gas transport were compared; in one approach, a modified version of the other, five sensors that measure oxygen concentration are placed at equal intervals inside the porous material sample. Both approaches are used to measure diffusion and wind-induced dispersion. Tracer gases of O2 and CO2 with average vertical (perpendicular to the surface of the porous medium) wind speeds of 0.02 to 1.06 m s-1 were applied at room temperature. Five different soil fractions are used to determine how particle size affects gas transport under a given wind condition at the soil surface. It is shown that gas dispersion was 20–100 times higher due to wind action. Ten wind conditions (plus a calm condition with zero wind speed) are selected, and the three perpendicular components of the wind as well as the wind fluctuations are characterised. Oxygen breakthrough curves as a function of distance from the wind-exposed surface of the porous medium were analysed numerically with a finite-difference based model to assess gas transport. 
Potential relationships between breakthrough time and wind speed characteristics—average wind speed, wind speed standard deviation and wind speed power spectrum properties in three dimensions—were investigated. Statistical analyses indicated that wind speed had a very significant impact on breakthrough time and that the characteristics of the wind speed component perpendicular to the porous medium surface were especially important. For the experiments, the penetration depth (Z50) is introduced, and a linear inverse relation between penetration depth and an empirical factor is determined. Wind characteristics affect the gas transport speed and penetration depth inside porous media for particle sizes above 1 mm; for particle sizes below 0.5 mm, the effect of wind on gas transport is negligible. The relations between wind speed characteristics (such as average wind speed or its power spectrum), particle shape and size, and gas transport are analysed. The main component of the wind affecting gas transport was found to be the vertical one. An expression (Eq. 26) for the wind-induced dispersion coefficient as a function of wind speed has been developed. Direct calculation of the empirical factors and the wind-induced dispersion coefficient at the surface of the porous medium is made more accurate by fitting the empirical and numerical parameters.
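The breakthrough-curve analysis above relies on a finite-difference solution of a one-dimensional gas transport (diffusion–dispersion) equation. A minimal explicit-scheme sketch, with all parameter values and boundary conditions chosen only for illustration rather than taken from the thesis's model:

```python
def diffuse_1d(c, D, dx, dt, steps, c_surface=1.0):
    """Explicit finite-difference solution of dc/dt = D * d2c/dx2 with a
    fixed concentration at the wind-exposed surface (x = 0) and a no-flux
    bottom boundary. Stable only if D*dt/dx**2 <= 0.5."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this D, dx, dt"
    c = list(c)
    for _ in range(steps):
        new = c[:]
        new[0] = c_surface                      # Dirichlet: surface exposed to wind
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[-1] = new[-2]                       # no-flux bottom boundary
        c = new
    return c
```

Recording the time at which each sensor depth reaches a threshold concentration yields the simulated breakthrough curve; a wind-induced dispersion coefficient can then be fitted by matching these simulated curves to the measured ones.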

98.Agricultural Straw Fibre Reinforced Concrete for Potential Industrial Ground-Floor Slab Application

Author:Bhooma Nepal 2020
Abstract:The primary objective of this research was to advance, through experimental work, knowledge of the use of agricultural straw fibre reinforcement in concrete. The focus is on the manufacture of straw composites, the development of the concrete matrix, and the investigation of concrete samples through various tests and standards to assess their suitability for ground-floor slab applications. Synthetic fibres such as steel and polypropylene used in the construction industry are not only expensive; the carbon emissions produced during their manufacture and their non-renewability are also major challenges for the industry. Given the recent trend towards sustainable building materials, this research focused on straw fibres, which are a by-product of crops produced in large quantities. Straw also has little economic use and is generally disposed of by farmers, often by open-air burning, a practice that causes severe air pollution and harms the health of many people around the world. The straw composites developed in this research not only utilise straw that currently brings no economic benefit but also prevent unsafe disposal. This leads to reduced greenhouse gas emissions through less open-air burning, the use of biodegradable, locally available material, and the replacement of synthetic, non-renewable fibres in construction practice. The composite fibres developed embody a sustainable path for future researchers and fibre manufacturers towards a cleaner construction industry. Both rice and wheat straw fibres treated with boiled water displayed increased tensile strength, by 38% and 55% respectively compared with their raw state. However, the tensile strength was not sufficient to form a strong enough bond with concrete to replace commercial fibres. 
Hence, composite fibres comprising straw fibres mixed with different polymer compounds were manufactured and tested. Composite fibres with up to 35% straw fibre content were determined to be the optimum fibre reinforcement in concrete; these composite fibres have tensile strength and ductility characteristics similar to industrially available synthetic fibres. At a 1% volume fraction of straw composite fibre in concrete, the residual tensile strength was 1.88 MPa at 0.47 mm beam deflection and 1.33 MPa at 3.02 mm deflection. Through the successful development of several series of straw–polymer composite fibres, this study demonstrates that straw fibres can be a viable alternative to synthetic fibres. These fibres are not only easy to manufacture and cost-effective; they also help conserve energy, offer greater design flexibility, and reduce greenhouse gas emissions.

99.Development of Multiphase and Multiscale Mathematical Models for Liquid Feedstock Thermally Sprayed Thermoelectric Coatings

Author:Ebrahim Gozali 2016
Abstract:The manufacture of nanostructured coatings by thermal spraying is currently the subject of increasing research effort aimed at obtaining unique, and often enhanced, properties compared to conventional coatings. High Velocity Suspension Flame Spraying (HVSFS) has recently appeared as a potential alternative to conventional High Velocity Oxygen-Fuel (HVOF) spraying for processing nanostructured spray material into dense surface layers in supersonic mode with a refined structure, from which superior physical and mechanical properties are expected. The first aim of this thesis is to apply CFD methods to analyse the system characteristics of high-speed thermal spray coating processes in order to improve the technology and advance the quality and efficiency of the HVSFS process. The second aim is to analyse heat transfer in thin films and thermoelectric thin films. The first part of the thesis deepens knowledge of this multidisciplinary process and addresses its current drawbacks, mainly cooling effects and reduction of the overall performance of the spray torch. To this end, a detailed parametric study was carried out to model and analyse the premixed (propane/oxygen) and non-premixed (ethanol/oxygen) combustion reactions, the gas flow dynamics of the HVSFS process, the interaction mechanism between the gas and liquid droplets (including disintegration and vaporization), and the droplet injection point (axial, transverse, and external), using an industrial DJ2700 torch (Sulzer-Metco, Wohlen, Switzerland) as an example. The numerical results reveal that the initial mass flow rate of the liquid feedstock mainly controls the HVSFS process and that radial injection schemes are not suitable for this system. The second part of the thesis investigates the effects of solvent composition and type on liquid droplet fragmentation and evaporation, combustion, and HVSFS gas dynamics. 
Here the solvent mixture is treated as a multicomponent droplet in the numerical model. The numerical results can serve as a reference for avoiding extraneous trial-and-error experimentation: they can assist in adjusting spraying parameters, e.g. the ratio or percentage of solvents for different powder materials, and they offer a way to visualize the phenomena occurring during liquid spraying. In the third part, the effects of solid nanoparticle content on liquid feedstock trajectory in the HVSFS are investigated. Theoretical models are used to calculate the thermo-physical properties of the liquid feedstock. Various solid nanoparticle concentrations in suspension droplets with different diameters are selected, and their effects on gas dynamics, vaporization rate, and secondary breakup are investigated. It is found that small droplets with high concentrations are more resistant to breakup; vaporization is therefore the dominant factor controlling the process, which leaves some droplets not fully evaporated. Larger droplets, however, undergo severe fragmentation inside the combustion chamber and release the nanoparticles in the middle of the barrel after full evaporation. Finally, a heat transfer model is developed for nanoparticles traveling inside thermal spray guns. In the absence of experimental data for nanoscale in-flight particles, the model is validated on thermoelectric thin films as candidate applications of the HVSFS process. For this purpose, the one-dimensional heat conduction problem in a thin film is investigated by solving three different heat conduction equations, namely the parabolic heat conduction equation (Fourier equation), the hyperbolic heat conduction equation (non-Fourier heat conduction), and the ballistic-diffusive heat conduction equations (BDE). A stable and convergent finite difference scheme is employed to solve the hyperbolic heat conduction (HHC) equation and the ballistic-diffusive equations.
The ballistic part of the BDE is solved with the Gauss-Legendre integration scheme. These equations are then applied across a thermoelectric thin film to investigate the steady-state and transient cooling mechanisms at the cold-junction surface. The numerical results indicate that the Fourier and hyperbolic equations give inaccurate results for transient heat conduction in a thin film, leading to less accurate predictions of the cooling at the cold-side boundary and of the temperature and heat-flux profiles in a thermoelectric film.
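As a minimal sketch of the simplest of the three models compared in this abstract, the snippet below integrates the 1D parabolic (Fourier) heat equation with an explicit finite-difference scheme. The geometry, diffusivity, and boundary temperatures are illustrative assumptions, not parameters from the thesis, and the scheme here is the textbook forward-Euler one rather than the specific scheme used for the HHC/BDE equations.

```python
import numpy as np

def solve_fourier_1d(alpha, L, T_left, T_right, nx=51, t_end=5.0):
    """Explicit finite-difference solution of the 1D parabolic (Fourier)
    heat equation dT/dt = alpha * d2T/dx2 with fixed-temperature ends."""
    dx = L / (nx - 1)
    dt = 0.4 * dx ** 2 / alpha        # obeys the stability limit dt <= dx^2 / (2*alpha)
    T = np.full(nx, T_left)           # start from a uniform field
    T[-1] = T_right
    for _ in range(int(t_end / dt)):
        # forward Euler in time, second-order central difference in space
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# After several diffusion times (L^2 / alpha) the profile approaches the
# linear steady state between the two boundary temperatures.
profile = solve_fourier_1d(alpha=1e-4, L=1e-2, T_left=300.0, T_right=350.0)
```

At film thicknesses comparable to the phonon mean free path this classical description breaks down, which is precisely why the thesis compares it against the hyperbolic and ballistic-diffusive models.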

100.New Development on Graphene-Contacted Single Molecular Junctions

Author:Qian Zhang 2019
Abstract:Molecular electronics holds great promise for realizing the ultimate miniaturization of electronic devices, and the investigation of charge transport through molecules tethered between pairs of electrode contacts is one of the most active areas of contemporary molecular electronics. To date, metallic materials have been widely used as electrodes to construct molecular junctions, where the desired characteristics are outstanding stability, conductivity, and fabricability. However, there is an increasing realization that new single-molecule electrical junction functionality can be achieved through the use of non-metallic electrodes. Fundamental studies suggest that carbon-based materials have the potential to be valuable alternative electrode materials for molecular electronics in the next generation of nanostructured devices. In light of the above, we systematically investigate the possibility of constructing non-metallic molecular junctions and the corresponding charge transport properties through such junctions by replacing the common gold electrodes with graphene electrodes. We have measured the electrical conductance of molecular junctions based on alkanedithiol/alkanediamine chains sandwiched between a gold and a graphene electrode and compared the effects of anchoring groups in graphene-based junctions. We also studied the technical effects of molecule-electrode contacts by comparing methods for capturing and measuring the electrical properties of single molecules in gold-graphene contact gaps. The decay constant obtained by the STM-based I(s) and CP-AFM break-junction (BJ) techniques is much lower than that obtained for symmetric gold junctions, which is related to the weak coupling at the molecule-graphene interface and the electronic structure of graphene. This asymmetric coupling induces higher conductance for alkanediamine chains than for the same hybrid metal-graphene molecular junction using thiol anchoring groups.
Moreover, we introduce an efficient data sorting algorithm and demonstrate its capacity on real experimental data sets. As a consequence, we suggest that novel 2D materials could serve as promising electrodes for constructing asymmetric junctions, and that the use of appropriate anchoring groups and techniques may lead to a much lower decay constant and more conductive molecular junctions at longer lengths.
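The length dependence discussed above is conventionally described by the tunnelling decay law G = G_c·exp(−βN), where N is the number of repeat units in the molecular backbone; β is extracted as the slope of ln(G) versus N. The sketch below shows that extraction on synthetic data. All numbers (prefactors and β values) are hypothetical placeholders chosen only to mimic the qualitative trend, not measurements from the thesis.

```python
import numpy as np

def decay_constant(n_units, conductance):
    """Least-squares fit of the decay constant beta (per repeat unit)
    from the slope of ln(G) versus chain length N."""
    slope, _ = np.polyfit(n_units, np.log(conductance), 1)
    return -slope

n = np.array([4, 6, 8, 10])                  # chain lengths (repeat units)
g_symmetric = 1e-3 * np.exp(-1.0 * n)        # hypothetical Au-Au data, beta = 1.0
g_hybrid = 1e-4 * np.exp(-0.4 * n)           # hypothetical Au-graphene data, lower beta

beta_au = decay_constant(n, g_symmetric)
beta_gr = decay_constant(n, g_hybrid)
```

With a lower β, the hybrid junction's conductance overtakes the symmetric one beyond a crossover length, which is the sense in which a lower decay constant yields "more conductive molecular junctions at longer lengths".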
Copyright 2006-2020 © Xi'an Jiaotong-Liverpool University 苏ICP备07016150号-1 京公网安备 11010102002019号