Find Research Output


SELECTED FILTERS: Publication Year 2013; Data Source EI

1.Some notes on the incompleteness theorem and shape grammars

Author:Dounas, Theodoros

Source:Communications in Computer and Information Science,2013,Vol.369 CCIS

Abstract:The paper presents a critique of the Shape Grammar paradigm viewed through the lens of Gödel's incompleteness theorem. Shape Grammars have been extensively researched through many lenses. Their productive, systemic nature was the focus of the first papers along with more recent treatises in the field, while their use in the analysis of known building styles has been extensive and a proven mechanism for style analysis. It is surprising, though, that the use of Shape Grammars in actual design practice has been minimal. The architectural community has not actively used the paradigm in the design of real buildings, probably because of the rigid analytical approach to style and rules that follows from the academic analysis the paradigm has been subjected to. However, I propose that there is another underlying reason beyond the rigid approach to constructing a Shape Grammar. The nature of the concurrent application and creation of the rules lies close to Gödel's incompleteness theorem, which uses a multitude of Turing Machines to prove that, from a set of true axioms A, we will never be able to determine whether all sentences are true without inventing new axioms outside the initial set A, themselves unproven in terms of their truth or falsity. Negation of this possibility drives us to the conclusion that true Design can never be feature-complete and thus can never be placed in a trusted framework that we all agree or believe to be the complete truth. © 2013 Springer-Verlag Berlin Heidelberg.

2.Segregated Lightweight Dynamic Rate (SLDR) control scheme for efficient internet communications

Author:Ting,T. O.;Ting,H. C.;Lee,Sanghyuk

Source:Lecture Notes in Electrical Engineering,2013,Vol.235 LNEE

Abstract:This paper proposes an effective Segregated Lightweight Dynamic Rate Control Scheme (SLDRCS) for the Internet. Based on a feedback analysis of current approaches, we found that the only congestion indicator used is the queue length, which captures delay and loss in the feedback mechanism only partially and may therefore control the network ineffectively when congestion occurs. We thus incorporate multiple congestion indicators into the scheme so that average delay and loss are fully controlled in both directions between sender and receiver. The behaviour of the next packet event is controlled using a discrete-event simulation tool with a First Come First Serve (FCFS) scheduling policy, and the algorithm is coded in the C programming language. The simulation results show that our Segregated Lightweight Dynamic Rate Control Scheme (SLDRCS) achieves substantial improvements in packet drops and average delay under various congestion levels and traffic load conditions compared with the current approach. © 2013 Springer Science+Business Media Dordrecht.
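
The abstract describes the control loop only qualitatively; the paper's actual SLDR equations are not given here. Below is a minimal, illustrative discrete-event sketch (in Python rather than the authors' C implementation) of an FCFS queue whose source rate is adapted from two congestion indicators, queue length and recent average delay. All thresholds, rates and the adjustment rule are assumptions, not the published scheme.

```python
import heapq
import random

# Illustrative FCFS discrete-event simulation with a two-indicator rate
# controller (queue backlog + recent average delay).  This is NOT the
# published SLDR scheme; thresholds and the adjustment rule are assumptions.
random.seed(0)
SIM_TIME = 100.0           # seconds of simulated traffic
SERVICE_RATE = 120.0       # packets/s the link can serve
send_rate = 100.0          # current source rate (packets/s), adapted online
QUEUE_LIMIT, DELAY_TARGET = 50, 0.05

queue = []                 # FIFO buffer of arrival timestamps
events = [(random.expovariate(send_rate), "arrival")]
busy_until = 0.0
delays, drops, sent = [], 0, 0

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_TIME:
        break
    if kind == "arrival":
        sent += 1
        if len(queue) >= QUEUE_LIMIT:
            drops += 1                          # buffer overflow -> packet drop
        else:
            queue.append(t)
            if busy_until <= t:                 # server idle: start service now
                busy_until = t + 1.0 / SERVICE_RATE
                heapq.heappush(events, (busy_until, "departure"))
        # two congestion indicators: backlog and recent average delay
        avg_delay = sum(delays[-50:]) / len(delays[-50:]) if delays else 0.0
        if len(queue) > QUEUE_LIMIT // 2 or avg_delay > DELAY_TARGET:
            send_rate *= 0.9                    # back off multiplicatively
        else:
            send_rate += 1.0                    # probe for more bandwidth
        heapq.heappush(events, (t + random.expovariate(send_rate), "arrival"))
    else:                                       # departure: FCFS, serve oldest packet
        arrival_time = queue.pop(0)
        delays.append(t - arrival_time)
        if queue:
            busy_until = t + 1.0 / SERVICE_RATE
            heapq.heappush(events, (busy_until, "departure"))

print(f"sent={sent} dropped={drops} mean_delay={sum(delays)/max(len(delays),1):.4f}s")
```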

3.Chebyshev tau meshless method based on the highest derivative for fourth order equations

Author:Shao, WT;Wu, XH

Source:APPLIED MATHEMATICAL MODELLING,2013,Vol.37

Abstract:It is well known that numerical integration is much less sensitive than numerical differentiation when dealing with differential equations. After integration, accuracy is no longer limited by that of the slowly convergent series for the highest derivative, but only by that of the unknown function itself. In this paper, a Chebyshev tau meshless method based on the highest derivative (CTMMHD) is developed for fourth order equations on irregularly shaped domains with complex boundary conditions. The problem domain is embedded in a domain of regular shape. The integration and multiplication of Chebyshev expansions are given in matrix representations. Several numerical experiments including standard biharmonic problems, problems with variable coefficients and nonlinear problems are implemented to verify the high accuracy and efficiency of our method. (C) 2012 Elsevier Inc. All rights reserved.
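
As a rough illustration of the "work with the highest derivative" idea, the sketch below expands u'''' of a manufactured solution in Chebyshev form and integrates it four times with numpy.polynomial.chebyshev, fixing the integration constants with a cubic correction that stands in for boundary conditions. It is not the CTMMHD scheme of the paper (no irregular-domain embedding, no tau treatment of general boundary conditions); node counts and degrees are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative only: recover u on [-1, 1] from its fourth derivative by
# integrating a Chebyshev expansion of u'''' four times.  Not the CTMMHD
# method itself; the integration constants are fixed here from a known
# exact solution instead of general boundary conditions.
u_exact = lambda x: np.sin(np.pi * x)              # manufactured solution
f = lambda x: np.pi ** 4 * np.sin(np.pi * x)       # f = u''''

x = np.cos(np.pi * np.arange(33) / 32)             # Chebyshev-Gauss-Lobatto nodes
c4 = C.chebfit(x, f(x), 20)                        # expand u'''' in T_k
c0 = C.chebint(c4, m=4)                            # integrate 4 times (constants = 0)

# Fix the four integration constants with a cubic correction so the result
# matches u and u' at both endpoints (a stand-in for real boundary conditions).
A = np.array([[1, -1, 1, -1],                      # value at x = -1 (monomial basis)
              [1, 1, 1, 1],                        # value at x = +1
              [0, 1, -2, 3],                       # derivative at x = -1
              [0, 1, 2, 3]], float)                # derivative at x = +1
resid = np.array([u_exact(-1) - C.chebval(-1, c0),
                  u_exact(1) - C.chebval(1, c0),
                  np.pi * np.cos(-np.pi) - C.chebval(-1, C.chebder(c0)),
                  np.pi * np.cos(np.pi) - C.chebval(1, C.chebder(c0))])
a = np.linalg.solve(A, resid)                      # coefficients of 1, x, x^2, x^3

xx = np.linspace(-1, 1, 201)
u_num = C.chebval(xx, c0) + np.polyval(a[::-1], xx)
print("max error:", np.abs(u_num - u_exact(xx)).max())
```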

4.Analysis of liquid feedstock behavior in high velocity suspension flame spraying for the development of nanostructured coatings

Author:Gozali, Ebrahim ; Kamnis, Spyros ; Gu, Sai

Source:Proceedings of the International Thermal Spray Conference,2013,Vol.

Abstract:Over the last decade the interest in thick nano-structured layers has been increasingly growing. Several new applications, including nanostructured thermoelectric coatings, thermally sprayed photovoltaic systems and solid oxide fuel cells, require reduction of micro-cracking, resistance to thermal shock and/or controlled porosity. The high velocity suspension flame spray (HVSFS) is a promising method to prepare advanced materials from nano-sized particles with unique properties. However, compared to the conventional thermal spray, HVSFS is by far more complex and difficult to control because the liquid feedstock phase undergoes aerodynamic break up and vaporization. The effects of suspension droplet size, injection velocity and mass flow rate were parametrically studied and the results were compared for axial, transverse and external injection. The numerical simulation consists of modeling aerodynamic droplet break-up and evaporation, heat and mass transfer between liquid droplets and gas phase.

5.Unveiling the dynamics in RNA epigenetic regulations

Author:Meng, J;Cui, XD;Liu, H;Zhang, L;Zhang, SW;Rao, MK;Chen, YD;Huang, YF

Source:2013 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM),2013,Vol.

Abstract:Despite the prevalent studies of DNA/chromatin-related epigenetics, such as histone modifications and DNA methylation, RNA epigenetics has not received the attention it deserves due to the lack of high-throughput approaches for profiling the epitranscriptome. Recently, a new affinity-based sequencing approach, MeRIP-seq, was developed and applied to survey global mRNA N6-methyladenosine (m6A) in mammalian cells. As a marriage of ChIP-seq and RNA-seq, MeRIP-seq has the potential to study, for the first time, the transcriptome-wide distribution of different types of post-transcriptional RNA modifications. Yet this technology introduces new computational challenges that have not been adequately addressed. We previously developed a MATLAB-based package, 'exomePeak', for detection of RNA methylation sites from MeRIP-seq data. Here, we extend the features of exomePeak with a novel computational framework that enables differential analysis to unveil the dynamics of RNA epigenetic regulation. The differential analysis monitors the percentage of modified RNA molecules among the total transcribed RNAs, which directly reflects the impact of RNA epigenetic regulation. In contrast, currently available software packages developed for sequencing-based differential analysis, such as DESeq or edgeR, monitor changes in the absolute amount of molecules and, if applied to MeRIP-seq data, might be dominated by transcriptional differential gene expression. The algorithm is implemented as an R package, 'exomePeak', and is freely available. It takes aligned BAM files directly as input, statistically supports biological replicates, corrects PCR artifacts, and outputs exome-based results in BED format, which is compatible with all major genome browsers for convenient visualization and manipulation. Examples are also provided to show how the exomePeak R package is integrated with existing tools for MeRIP-seq-based peak calling and differential analysis. In particular, the rationale behind each processing step, the specific methods used, best practice, and possible alternative strategies are briefly discussed. The algorithm was applied to the human HepG2 cell MeRIP-seq data sets and detected more than 16000 RNA m6A sites, many of which are differentially methylated under ultraviolet radiation. The challenges and potential of MeRIP-seq in epitranscriptome studies are discussed at the end.
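
As a purely conceptual illustration of "differential methylation as a change in the IP/input read-count ratio" (and not the exomePeak algorithm or its statistical model), the toy snippet below applies a 2x2 Fisher exact test to made-up read counts at a single site.

```python
from scipy.stats import fisher_exact

# Conceptual illustration only -- NOT the exomePeak algorithm.  For one
# candidate m6A site, differential methylation can be pictured as a change
# in the IP/input read-count ratio between two conditions; a 2x2 Fisher
# exact test on toy counts makes the idea concrete.

# toy read counts at one site (values are made up for illustration)
ip_untreated, input_untreated = 180, 300   # condition A: IP vs input reads
ip_treated,   input_treated   = 95,  310   # condition B: IP vs input reads

table = [[ip_untreated, input_untreated],
         [ip_treated,   input_treated]]
odds_ratio, p_value = fisher_exact(table)

pct_a = ip_untreated / (ip_untreated + input_untreated)
pct_b = ip_treated / (ip_treated + input_treated)
print(f"IP fraction: {pct_a:.2f} -> {pct_b:.2f}, odds ratio {odds_ratio:.2f}, p = {p_value:.3g}")
```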

6.A resource-centric architecture for service-oriented cyber physical system

Author:Wan, Kaiyu ; Alagar, Vangalur

Source:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),2013,Vol.7861 LNCS

Abstract:The strategic application domains of Cyber Physical Systems (CPS) [7,6] include health care, transportation, managing large-scale physical infrastructures, and defense systems (avionics). In all these applications there is a need to acquire reliable resources in order to provide trustworthy services at every service request context. Hence we view CPS as a large distributed highway for services and supply chain management. In traditional service-oriented systems, service, but not resource, is a first-class entity in the architecture model, and resources are assumed to be available at run time to provide services. However, resource quality and availability are determining factors for the timeliness and trustworthiness of CPS services, especially during emergencies. So, in the service-oriented view of CPS discussed in this paper, we place services around resources, because resources constrain service quality. We investigate a resource-centric, context-dependent model for service-oriented CPS and discuss a 3-tiered architecture for service-oriented CPS. © 2013 Springer-Verlag.

7.Recursive learning of genetic algorithms with task decomposition and varied rule set

Author:Fang, Lei ; Guan, Sheng-Uei ; Zhang, Haofan

Source:Modeling Applications and Theoretical Innovations in Interdisciplinary Evolutionary Computation,2013,Vol.

Abstract:Rule-based Genetic Algorithms (GAs) have been used in pattern classification (Corcoran & Sen, 1994), but conventional GAs have weaknesses: the time spent on learning is long, and the classification accuracy achieved is not satisfactory. These drawbacks are due to undesirable features embedded in conventional GAs. First, the number of rules within the chromosome of a GA classifier is usually set and fixed before training and is not problem-dependent. Secondly, conventional approaches train on the data in batch without considering whether decomposition would help solve the problem. Thirdly, when facing large-scale real-world problems, GAs cannot utilise resources efficiently, leading to premature convergence. Based on these observations, this paper develops a novel algorithmic framework that features automatic domain and task decomposition and problem-dependent chromosome length (rule number) selection to resolve these undesirable features. The proposed Recursive Learning of Genetic Algorithm with Task Decomposition and Varied Rule Set (RLGA) method is recursive and trains and evolves a team of learners, using the concept of local fitness to decompose the original problem into sub-problems. According to the experimental results, RLGA performs better than GAs and other related solutions in terms of training duration and generalization accuracy. © 2013 by IGI Global. All rights reserved.
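
A minimal skeleton of the recursive task-decomposition idea is sketched below: train a learner, and if its local fitness falls below a threshold, split the domain and recurse on each part to build a team of learners. The learner here is a trivial nearest-centroid classifier standing in for the paper's rule-based GA; the threshold, the splitting rule and the toy data are assumptions.

```python
import numpy as np

# Skeleton of recursive task decomposition with "local fitness".  The learner
# is a deliberately simple stand-in for the GA-evolved rule set of the paper.

class CentroidLearner:                       # stand-in for a GA-evolved rule set
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(0) for c in self.classes])
        return self
    def accuracy(self, X, y):                # "local fitness" of this learner
        d = ((X[:, None, :] - self.centroids[None]) ** 2).sum(-1)
        return float((self.classes[d.argmin(1)] == y).mean())

def recursive_learn(X, y, threshold=0.9, depth=0, max_depth=3):
    learner = CentroidLearner().fit(X, y)
    fit = learner.accuracy(X, y)
    if fit >= threshold or depth == max_depth or len(X) < 20:
        return [(learner, fit)]              # good enough: keep this learner
    # task decomposition: split the domain along its widest feature
    j = np.ptp(X, axis=0).argmax()
    mid = np.median(X[:, j])
    left, right = X[:, j] <= mid, X[:, j] > mid
    return (recursive_learn(X[left], y[left], threshold, depth + 1, max_depth) +
            recursive_learn(X[right], y[right], threshold, depth + 1, max_depth))

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)    # XOR-like problem: needs decomposition
team = recursive_learn(X, y)
print(len(team), "learners; local fitness:", [round(f, 2) for _, f in team])
```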

8.Fast kNN graph construction with locality sensitive hashing

Author:Zhang, Yan-Ming ; Huang, Kaizhu ; Geng, Guanggang ; Liu, Cheng-Lin

Source:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),2013,Vol.8189 LNAI

Abstract:The k nearest neighbors (kNN) graph, perhaps the most popular graph in machine learning, plays an essential role in graph-based learning methods. Despite its many elegant properties, the brute-force kNN graph construction method has computational complexity of O(n²), which is prohibitive for large-scale data sets. In this paper, based on the divide-and-conquer strategy, we propose an efficient algorithm for approximating kNN graphs, which has time complexity of only O(l(d + log n)n) (d is the dimensionality and l is usually a small number). This is much faster than most existing fast methods. Specifically, we use the locality sensitive hashing technique to divide items into small subsets of equal size, and then build one kNN graph on each subset using the brute-force method. To enhance the approximation quality, we repeat this procedure several times to generate multiple basic approximate graphs, and combine them to yield a high-quality graph. Compared with existing methods, the proposed approach is (1) much more efficient in speed; (2) applicable to generic similarity measures; and (3) easy to parallelize. Finally, on three benchmark large-scale data sets, our method beats existing fast methods with obvious advantages. © 2013 Springer-Verlag.
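
A minimal sketch of the divide-and-conquer construction described above: hash points with random sign projections (a simple LSH family), build a brute-force kNN graph inside each bucket, repeat with fresh hash functions, and keep the k closest candidates seen overall. Bucket sizes here are not equalized, and the parameter choices are illustrative assumptions rather than the paper's algorithmic details.

```python
import numpy as np

# Approximate kNN graph via LSH buckets + brute force within each bucket.
# Repetitions with fresh hash functions are merged by keeping the k closest
# candidates found for each point.

def approx_knn_graph(X, k=5, n_bits=4, repeats=3, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    candidates = [dict() for _ in range(n)]          # i -> {j: squared distance}

    for _ in range(repeats):
        # LSH step: signs of random projections give an integer bucket code
        H = rng.standard_normal((d, n_bits))
        codes = ((X @ H > 0).astype(int) * (1 << np.arange(n_bits))).sum(axis=1)
        for code in np.unique(codes):
            idx = np.flatnonzero(codes == code)
            if len(idx) < 2:
                continue
            block = X[idx]
            d2 = ((block[:, None, :] - block[None, :, :]) ** 2).sum(-1)
            np.fill_diagonal(d2, np.inf)
            for r, i in enumerate(idx):              # brute force inside the bucket
                for c in np.argsort(d2[r])[:k]:
                    candidates[i][idx[c]] = d2[r, c]

    # combine the basic graphs: keep the k closest candidates per node
    return [sorted(c, key=c.get)[:k] for c in candidates]

X = np.random.default_rng(1).standard_normal((1000, 16))
graph = approx_knn_graph(X, k=5)
print(len(graph), "nodes; neighbours of node 0:", graph[0])
```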

9.Integrating context-awareness and trustworthiness in IoT descriptions

Author:Wan, Kaiyu ; Alagar, Vangalur

Source:Proceedings - 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, GreenCom-iThings-CPSCom 2013,2013,Vol.

Abstract:The Internet of Things (IoT) refers to a broad spectrum of data, information, knowledge, products, devices, resources and services about which descriptions of sufficient depth and precision should be published and made available in geographically distributed business networks in order to maximize the benefits to industry and business. The stakeholders of IoT should have access to a user-centric and service-centric framework so that they may query, discover, share, allocate, and exchange the IoTs in order to achieve their business goals and maximize economic value. This is possible if the IoTs are available, reliable, secure, and safe in all contexts. Added efficiency is achieved by introducing context-awareness for the IoTs so that IoTs for which accurate descriptions are available can be obtained at the right time, for the right price, and at the right location. In this paper, a framework meeting these objectives is proposed. Contexts, context-awareness, and trustworthiness issues are rigorously discussed and integrated with IoT descriptions. © 2013 IEEE.

10.A soft-switching post-regulator for multi-outputs dual forward DC/DC converter with tight output voltage regulation

Author:Su, B;Wen, HQ;Zhang, JM;Lu, ZY

Source:IET POWER ELECTRONICS,2013,Vol.6

Abstract:An improved soft-switching post-regulator topology for a multi-output dual forward DC/DC converter is presented. A delay-trailing modulation method is proposed. Zero-voltage-switching (ZVS) on/zero-current-switching (ZCS) off conditions can be realised for both the primary MOSFETs and the secondary rectifying diodes. Excellent decoupling among different outputs and tight output voltage regulation are achieved. The efficiency is improved owing to the single power-conversion stage and the sharing of primary components. The operating principle and main features of this improved topology are analysed. Key design issues, including zero-voltage-switching operation, the maximum duty ratio of the primary-side MOSFETs, and parameter determination for the primary magnetising inductor and the secondary additional redistribution capacitor, are discussed. Finally, an experimental prototype with two outputs (300-400 V input; 48 V, 6.5 A and 24 V, 11 A outputs) is built to verify the theoretical analysis. The measured efficiency at the normal operating input voltage (400 V) is improved by about 0.5-2%, and the measured efficiency under light loads is improved by more than 2%.

11.Integration of gene expression, genome wide DNA methylation, and gene networks for clinical outcome prediction in ovarian cancer

Author:Zhang, L;Liu, H;Meng, J;Wang, XS;Chen, YD;Huang, YF

Source:2013 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM),2013,Vol.

Abstract:An integrative clinical outcome prediction model called the gene interaction regularized elastic net (GIREN) method is proposed in this paper. GIREN combines gene expression, methylation profiles, and gene interaction networks in order to reveal genomic and epigenomic features that bear important prognostic value. With GIREN, gene expression and DNA methylation profiles are first jointly analyzed in a linear regression model, and the gene interaction network is simultaneously integrated as a regularizing penalty following an elastic net formulation. This regularization also enforces sparsity in the solution, so that features with prognostic value are automatically selected. To solve the regularized optimization, an iterative gradient descent algorithm is also developed. We applied GIREN to a set of 87 human ovarian cancer samples that underwent a rigorous sample selection. The predicted outcome was used to group patients into high-risk and low-risk groups. Validation showed that GIREN outperformed other competing algorithms, including SuperPCA.
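
The snippet below is an illustrative stand-in for this kind of model, not the GIREN code: a linear outcome model on synthetic "expression + methylation" features with an elastic-net penalty plus a graph-Laplacian term built from a toy gene network, fitted by proximal gradient descent. All data, penalty weights and the network are assumptions.

```python
import numpy as np

# Network-regularized elastic net fitted by proximal gradient descent (ISTA).
# Synthetic stand-in for the kind of model described in the abstract.
rng = np.random.default_rng(0)
n, p = 87, 40                      # samples x features (expression + methylation)
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:5] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# toy gene network: chain adjacency over the first 10 features
A = np.zeros((p, p))
for i in range(9):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A          # graph Laplacian used as a smoothness penalty

lam1, lam2, lam_net, step = 0.05, 0.05, 0.1, 0.01
beta = np.zeros(p)
for _ in range(1000):
    grad = X.T @ (X @ beta - y) / n + lam2 * beta + lam_net * (L @ beta)
    beta = beta - step * grad
    beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam1, 0.0)  # L1 prox

selected = np.flatnonzero(np.abs(beta) > 1e-3)
print("selected features:", selected)
```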

12.DRAWING THE INVISIBLE: VISUALIZING PERSONAL SPACES

Author:Ivanovic, GW

Source:PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON COMPUTER-AIDED ARCHITECTURAL DESIGN RESEARCH IN ASIA (CAADRIA 2013): OPEN SYSTEMS,2013,Vol.

Abstract:The present research discusses the importance of human activity as a place-making coordinate, and proposes Activity Counter Maps (ACM) as a methodology for visualizing people's social spaces. Through two case studies, the ACM were tested for creating representations of both the intensity of occupation in public spaces and people's public distances, combined into a unified "three-dimensional public shape". The research analyses the resulting images and discusses their possible applications for digital design.

13.Real-time compliance control of an assistive joint using QNX operating system

Author:Gu,Shuang;Wu,Cheng Dong;Yue,Yong;Maple,Carsten;Li,Da You;Liu,Bei Sheng

Source:International Journal of Automation and Computing,2013,Vol.10

Abstract:An assistive robot is a novel service robot that plays an important role in society. For instance, it can amplify human power not only for the elderly and disabled to recover or rehabilitate their lost or impaired musculoskeletal functions, but also for healthy people to perform tasks requiring large forces. Consequently, both accurate position control and human safety, that is, compliance, must be considered. This paper deals with the robot compliance control problem based on the QNX real-time operating system. Firstly, the mechanical structure of a compliant joint of the assistive robot is designed using SolidWorks. Then the parameters of the assistive robot system are identified. The robot control software covers data acquisition, data processing, and control to meet the compliance requirement of the joint. Finally, a Hogan impedance control experiment is carried out. The experimental results prove the effectiveness of the proposed method. © 2013 Institute of Automation, Chinese Academy of Sciences and Springer-Verlag Berlin Heidelberg.
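
A single-joint impedance-control sketch, assuming a simple rigid joint model and a measured external torque, is given below to make the compliance idea concrete. It is not the paper's QNX implementation; the virtual mass-spring-damper gains, the joint inertia and the disturbance profile are assumptions.

```python
# Minimal 1-DOF impedance control: the controller makes the joint behave like
# a virtual mass-spring-damper, M*e_dd + B*e_d + K*e = tau_ext (e = theta - theta_d),
# so an external torque produces a compliant deflection instead of a rigid error.
M, B, K = 0.5, 4.0, 50.0        # desired (virtual) inertia, damping, stiffness
J = 0.2                          # actual joint inertia (toy model, no friction)
dt, T = 0.001, 3.0
theta_d = 0.5                    # constant desired joint angle [rad]

theta, omega = 0.0, 0.0
log = []
for k in range(int(T / dt)):
    t = k * dt
    tau_ext = 2.0 if 1.0 < t < 2.0 else 0.0       # human pushes on the link
    # impedance law: commanded acceleration realising the virtual dynamics
    acc_cmd = (tau_ext - B * omega - K * (theta - theta_d)) / M
    tau_motor = J * acc_cmd - tau_ext              # feedback-linearising torque
    alpha = (tau_motor + tau_ext) / J              # plant: J*theta_dd = tau_motor + tau_ext
    omega += alpha * dt
    theta += omega * dt
    log.append((t, theta))

theta_push = log[int(1.5 / dt)][1]
print(f"theta during push (t=1.5 s): {theta_push:.3f} rad, desired {theta_d} rad")
```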

14.RF characteristics of wireless capsule endoscopy in human body

Author:Zhang,Meng;Lim,Eng Gee;Wang,Zhao;Tillo,Tammam;Man,Ka Lok;Wang,Jing Chen

Source:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),2013,Vol.7861 LNCS

Abstract:Wireless capsule endoscopy (WCE) is an ingestible electronic diagnostic device capable of working wirelessly, without the limitations of traditional wired diagnostic tools, such as cable discomfort and the inability to examine highly convoluted sections of the small intestine. However, this technique still faces many practical challenges and requires further improvement. This paper proposes a methodology for investigating the performance of a WCE system by studying its electromagnetic (EM) wave propagation through the human body. Based on this investigation, the capsule's positioning information can be obtained. The WCE transmission channel model is constructed to evaluate signal attenuation and to determine the capsule position. The details of the proposed research methodology are presented in this paper. © 2013 Springer-Verlag.
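
To make the positioning idea concrete, the toy sketch below assumes a generic log-distance path-loss model for through-body propagation, inverts the received power at a few body-surface sensors into range estimates, and grid-searches the capsule position by least squares. The path-loss exponent, reference loss and sensor layout are assumptions, not the channel model developed in the paper.

```python
import numpy as np

# Toy capsule localization from received power under an assumed
# log-distance path-loss model: PL(d) = PL0 + 10*n*log10(d / 0.1 m).

def rx_power_dbm(tx_dbm, dist_m, n=4.5, pl0_db=40.0):
    return tx_dbm - (pl0_db + 10.0 * n * np.log10(dist_m / 0.1))

sensors = np.array([[0.15, 0.0, 0.0], [-0.15, 0.0, 0.0],
                    [0.0, 0.2, 0.0], [0.0, 0.0, 0.25]])   # on-body sensors [m]
capsule = np.array([0.03, 0.05, 0.08])                    # true capsule position [m]
tx = 0.0                                                   # 0 dBm transmit power

d_true = np.linalg.norm(sensors - capsule, axis=1)
rssi = rx_power_dbm(tx, d_true) + np.random.default_rng(0).normal(0, 1.0, 4)

# invert the model to ranges, then grid-search the position by least squares
d_est = 0.1 * 10 ** ((tx - rssi - 40.0) / (10 * 4.5))
grid = np.mgrid[-0.1:0.1:41j, -0.1:0.1:41j, 0.0:0.2:41j].reshape(3, -1).T
errors = ((np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2) - d_est) ** 2).sum(1)
print("estimated capsule position:", grid[errors.argmin()])
```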

15.Modeling resource-centric services in cyber physical systems

Author:Wan, Kaiyu ; Alagar, Vangalur

Source:Lecture Notes in Engineering and Computer Science,2013,Vol.2203

Abstract:A service-oriented view of CPS is taken in this paper, because CPS is a good platform for global supply chain management, service acquisition and service provision. Complex services are enabled by a strong interplay between computational and physical components that may be globally distributed. A necessary condition for such service delivery is that the resources required for complex services are of high quality and are available at service execution times. In a resource-centric service model, both resource quality and the quality of the service using that resource are explicitly stated. In order to make resource quality visible, resource providers will publish a faithful description of the resources and service providers will publish trustworthy service descriptions which explicitly mention the resources used by them. In this paper a cascaded specification approach is discussed for describing resource types, the services offered by resources, and a cyber-configured service that packages physical services.

16.Numerical investigation of combustion and liquid feedstock in high velocity suspension flame spraying process

Author:Gozali, E;Kamnis, S;Gu, S

Source:SURFACE & COATINGS TECHNOLOGY,2013,Vol.228

Abstract:Over the last decade the interest in thick nano-structured layers has been increasingly growing. Several new applications, including nanostructured thermoelectric coatings, thermally sprayed photovoltaic systems and solid oxide fuel cells, require reduction of micro-cracking, resistance to thermal shock and/or controlled porosity. The high velocity suspension flame spray (HVSFS) is a promising method to prepare advanced materials from nano-sized particles with unique properties. However, compared to the conventional thermal spray, HVSFS is by far more complex and difficult to control because the liquid feedstock phase undergoes aerodynamic break-up and vaporization. The effects of suspension droplet size, injection velocity and mass flow rate were parametrically studied and the results were compared for axial, transverse and external injection. The model consists of several sub-models that include premixed propane-oxygen combustion, non-premixed ethanol-oxygen combustion, aerodynamic droplet break-up and evaporation, and heat and mass transfer between liquid droplets and the gas phase. The models thereby give a detailed description of the relevant set of parameters and suggest a set of optimum spray conditions, serving as a fundamental reference for further development of the technology. (C) 2013 Elsevier B.V. All rights reserved.
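
As a back-of-the-envelope companion to the evaporation sub-model mentioned above, the snippet below uses the classical d² law, d(t)^2 = d0^2 - K*t, with an assumed order-of-magnitude evaporation constant, to show how strongly droplet lifetime depends on the initial suspension droplet size. It is far simpler than the coupled break-up/evaporation models of the paper.

```python
# d^2 evaporation law: d(t)^2 = d0^2 - K*t, so complete evaporation takes d0^2 / K.
# K below is an assumed order-of-magnitude value for a liquid droplet in a hot
# flame, used only to make the "smaller droplets vanish much faster" point concrete.
K = 1.0e-6                     # assumed evaporation constant [m^2/s]
for d0_um in (100, 30, 5):     # representative suspension droplet diameters
    d0 = d0_um * 1e-6
    lifetime = d0 ** 2 / K     # time for the droplet to evaporate completely
    print(f"d0 = {d0_um:3d} um  ->  lifetime ~ {lifetime * 1e6:8.2f} microseconds")
```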

17.Design and performance evaluation of a bidirectional isolated dc-dc converter with extended dual-phase-shift scheme

Author:Wen, HQ;Su, B;Xiao, WD

Source:IET POWER ELECTRONICS,2013,Vol.6

Abstract:This study describes the design and performance evaluation of a bidirectional isolated dc-dc converter with an extended dual-phase-shift (EDPS) scheme. The operating principle and equivalent circuits with consideration of the deadband are presented. The deadband effect with EDPS differs from that of the conventional phase-shift (CPS) scheme, and the corresponding compensation coefficient is determined. Different operation modes are identified with respect to the phase-shift angles of EDPS and the load conditions. The safe operational area is also analysed with a comparison of the different operation modes. The output voltage and output power characteristics with open-loop or closed-loop operation are discussed. An average theoretical reduction of 48.5% in the output voltage ripple using EDPS has been achieved. The average reductions in the inductor peak and rms currents with EDPS are statistically calculated as 37.8% and 26.8%, respectively. The measured efficiency has improved from 68.1% using CPS to 81.9% using EDPS for a low-power application.

18.Novel Wireless Capsule Endoscopy diagnosis system with adaptive image capturing rate

Author:Jin, Zhi ; Tillo, Tammam ; Lim, Eng Gee ; Wang, Zhao ; Xiao, Jimin

Source:VISAPP 2013 - Proceedings of the International Conference on Computer Vision Theory and Applications,2013,Vol.1

Abstract:Wireless Capsule Endoscopy (WCE) is a device used to diagnose the gastrointestinal (GI) tract, and it is one of the most used tools to inspect the small intestine. Inspection by WCE is non-invasive, and consequently it is more popular than other methods traditionally adopted in the examination of the GI tract. From the point of view of physicians, WCE is a favorable approach for increasing both the efficiency and the accuracy of diagnosis. The most significant drawback of WCE is the time it takes a physician to check all the frames captured in the GI tract, which is too long and can be up to 4 hours. Many anomaly-based techniques have been proposed to help physicians shorten the diagnosis time; however, these techniques still suffer from a high false alarm rate, which limits their actual use. Therefore, in this paper we propose a two-stage diagnosis system that first uses a normal capsule to capture the whole GI tract, together with an automatic detection technique that flags anomalies at the cost of a high false alarm rate. The low specificity of this first stage ensures that no anomalies will be missed. The second stage of the proposed diagnosis system uses a different capsule with an adaptive image capturing rate to re-capture the GI tract. In this stage the capsule uses a high image capturing rate for segments of the GI tract where an anomaly was detected in the first stage, whereas in the other segments a lower image capturing rate is used in order to make better use of the second capsule's battery. Consequently, the second generated video, which will be inspected by the physician, has a higher-resolution sequence around the areas with suspected lesions.
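
A minimal sketch of the second-stage rate policy is given below: the GI tract is split into segments, and a high frame rate is used only in segments flagged by the first pass. The segment count, frame rates and transit times are illustrative assumptions, not values from the paper.

```python
# Adaptive capture-rate policy: high frame rate only where the (high-sensitivity,
# high false-alarm) first pass flagged a possible anomaly.
SEGMENTS = 12                       # coarse split of the GI tract
HIGH_FPS, LOW_FPS = 6, 1            # frames per second in each mode
SEGMENT_MINUTES = 30                # assumed transit time per segment

first_pass_flags = [False, False, True, False, True, True,
                    False, False, False, True, False, False]

total_frames = 0
for seg, suspicious in enumerate(first_pass_flags):
    fps = HIGH_FPS if suspicious else LOW_FPS
    frames = fps * SEGMENT_MINUTES * 60
    total_frames += frames
    print(f"segment {seg:2d}: {'HIGH' if suspicious else 'low '} rate, {frames} frames")

uniform_high = HIGH_FPS * SEGMENT_MINUTES * 60 * SEGMENTS
print(f"adaptive total: {total_frames} frames vs {uniform_high} at a uniform high rate")
```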

19.Gravitational Co-evolution and Opposition-based Optimization Algorithm

Author:Lou, Y;Li, JL;Shi, YH;Jin, LP

Source:INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS,2013,Vol.6

Abstract:In this paper, a Gravitational Co-evolution and Opposition-based Optimization (GCOO) algorithm is proposed for solving unconstrained optimization problems. Firstly, under the framework of gravitation-based co-evolution, individuals of the population are divided into two subpopulations according to their fitness values (objective function values), i.e., the elitist subpopulation and the common subpopulation, and then three types of gravitation-based update methods are applied. With the cooperation of the opposition-based operation, the proposed algorithm conducts the optimization process collaboratively. Three benchmark algorithms and fifteen typical benchmark functions are used to evaluate the performance of GCOO; the substantial experimental data show that the proposed algorithm has better performance with regard to effectiveness and robustness in solving unconstrained optimization problems.
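
The sketch below illustrates two ingredients named in the abstract: the opposition-based operation (comparing each individual with its mirror point, lower + upper - x) and the fitness-based split into elitist and common subpopulations. The gravitation-based update rules themselves are not reproduced; the objective function, bounds and split ratio are assumptions.

```python
import numpy as np

# Opposition-based operation + fitness-based population split, as a partial
# illustration of GCOO's ingredients (the gravitational updates are omitted).

def sphere(x):                       # simple benchmark objective (minimization)
    return float((x ** 2).sum())

rng = np.random.default_rng(0)
lower, upper, dim, pop_size = -5.0, 5.0, 10, 30
pop = rng.uniform(lower, upper, (pop_size, dim))

# opposition-based operation: keep the better of each point and its opposite
opposite = lower + upper - pop
better = np.array([sphere(o) < sphere(p) for p, o in zip(pop, opposite)])
pop = np.where(better[:, None], opposite, pop)

# split by fitness into an elitist and a common subpopulation
fitness = np.array([sphere(p) for p in pop])
order = np.argsort(fitness)
elite, common = pop[order[: pop_size // 5]], pop[order[pop_size // 5:]]
print("best fitness:", fitness[order[0]], "| elite:", len(elite), "common:", len(common))
```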

20.Designer's dilemma

Author:Spaeth,A. Benjamin

Source:Open Systems - Proceedings of the 18th International Conference on Computer-Aided Architectural Design Research in Asia, CAADRIA 2013,2013,Vol.

Abstract:Performance-based design systems are characterised by the use of performance-related evaluation methods or by design environments that restrict the design space according to performance criteria. The performance of a design can be evaluated by numerical simulations. With the use of numerical simulations a fundamental dilemma appears: the precision implied in numerical simulations and the imprecision of the design process itself are in systematic contradiction. User control or user interaction in open systems places the user in charge of the imprecision required by the design process. In closed systems, such as the evolutionary system described below, methods of imprecision have to be integrated into the otherwise precise simulation-based evaluation procedure. Through tolerant selection methods and the gradual evaluation of individuals, the rigid and precise system can be guided towards a design system rather than an optimisation system. Due to technical requirements related both to the use of computer systems and to the systematic conditions imposed by simulations, the use of tolerant selection methods is limited.
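
A minimal sketch of the "tolerant selection" idea discussed above: any candidate whose simulated score lies within a tolerance band of the best is treated as equally acceptable, and the final choice among them is deliberately left imprecise (random here). The scores and the 10% tolerance are illustrative assumptions.

```python
import random

# Tolerant selection: instead of always picking the strictly best-scoring
# design, accept anything within a tolerance band of the best and choose
# among the acceptable candidates at random (imprecision by design).

def tolerant_select(population, score, tolerance=0.10):
    """Return one individual whose score is within `tolerance` of the best."""
    scored = [(score(ind), ind) for ind in population]
    best = min(s for s, _ in scored)                      # lower score = better
    acceptable = [ind for s, ind in scored if s <= best * (1 + tolerance)]
    return random.choice(acceptable)

# toy example: candidate designs scored by a (stand-in) simulation result
designs = ["A", "B", "C", "D", "E"]
sim_scores = {"A": 102.0, "B": 100.0, "C": 108.5, "D": 131.0, "E": 104.9}
random.seed(1)
print("selected design:", tolerant_select(designs, sim_scores.get))
```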
Total 76 results found