
    3-Col problem modelling using simple kernel P systems

    This paper presents the newly introduced class of (simple) kernel P systems ((s)kP systems) and investigates, through a 3-colouring problem case study, the expressive power and efficiency of kernel P systems. It describes two skP systems that model the problem and analyses them in terms of efficiency and complexity. The skP models prove to be more succinct (in terms of number of rules, objects, cells and execution steps) than the corresponding tissue P system available in the literature that solves the same problem, at the expense of longer rules. (Funding: Ministerio de Ciencia e Innovación TIN2009-13192; Junta de Andalucía P08-TIC-0420.)
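    The skP model itself is not reproduced here, but the decision problem it attacks can be stated compactly. Below is a minimal sequential Python sketch of 3-colouring by generate-and-test; the membrane-computing solution obtains the same effect by generating all colourings in parallel through cell division, so the brute-force strategy, graph and function name below are illustrative assumptions, not the authors' skP system.

        from itertools import product

        def is_3_colourable(vertices, edges):
            """Brute-force 3-colouring: try every assignment of 3 colours.

            An skP-system solution explores these 3**n assignments in
            parallel via membrane division; here we enumerate sequentially.
            """
            for colouring in product(range(3), repeat=len(vertices)):
                colour = dict(zip(vertices, colouring))
                # A colouring is proper if no edge joins same-coloured vertices.
                if all(colour[u] != colour[v] for u, v in edges):
                    return True
            return False

        # Example: a 4-cycle is 2-colourable, hence 3-colourable.
        print(is_3_colourable("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))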

    Scheduling Dimension Reduction of LPV Models -- A Deep Neural Network Approach

    In this paper, the existing Scheduling Dimension Reduction (SDR) methods for Linear Parameter-Varying (LPV) models are reviewed and a Deep Neural Network (DNN) approach is developed that achieves higher model accuracy under scheduling dimension reduction. The proposed DNN method and existing SDR methods are compared on a two-link robotic manipulator, both in terms of model accuracy and performance of controllers synthesized with the reduced models. The methods compared include SDR for state-space models using Principal Component Analysis (PCA), Kernel PCA (KPCA) and Autoencoders (AE). On the robotic manipulator example, the DNN method achieves improved representation of the matrix variations of the original LPV model in terms of the Frobenius norm compared to the current methods. Moreover, when the resulting model is used for controller synthesis, improved closed-loop performance is obtained compared to the current methods. (Comment: Accepted to American Control Conference (ACC) 2020, Denver.)
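    To fix ideas, here is a minimal PyTorch sketch of the general idea behind learned scheduling dimension reduction: a network that compresses a high-dimensional scheduling vector rho to a reduced vector phi and reconstructs it. All dimensions, layer sizes and the loss are illustrative assumptions, not the paper's architecture or training criterion.

        import torch
        import torch.nn as nn

        class SchedulingReducer(nn.Module):
            # Autoencoder-style map: rho (dim 6, assumed) -> phi (dim 2, assumed).
            def __init__(self, n_rho=6, n_phi=2, hidden=32):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_rho, hidden), nn.Tanh(), nn.Linear(hidden, n_phi))
                self.decoder = nn.Sequential(
                    nn.Linear(n_phi, hidden), nn.Tanh(), nn.Linear(hidden, n_rho))

            def forward(self, rho):
                phi = self.encoder(rho)           # reduced scheduling variable
                return self.decoder(phi), phi     # reconstruction and phi

        model = SchedulingReducer()
        rho = torch.randn(256, 6)                 # batch of scheduling samples
        rho_hat, phi = model(rho)
        # A real training criterion would target the LPV matrix variations
        # (a Frobenius-norm error, as the abstract suggests); plain rho
        # reconstruction is used here only to keep the sketch self-contained.
        loss = nn.functional.mse_loss(rho_hat, rho)
        loss.backward()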

    A Reconfigurable Vector Instruction Processor for Accelerating a Convection Parametrization Model on FPGAs

    High Performance Computing (HPC) platforms allow scientists to model computationally intensive algorithms. HPC clusters increasingly use General-Purpose Graphics Processing Units (GPGPUs) as accelerators; FPGAs provide an attractive alternative to GPGPUs for use as co-processors, but they are still far from mainstream due to a number of challenges faced when using FPGA-based platforms. Our research aims to make FPGA-based high performance computing more accessible to the scientific community. In this work we present the results of investigating the acceleration of a particular atmospheric model, Flexpart, on FPGAs, focusing on the most computationally intensive kernel of the model. The key contribution of our work is the architectural exploration we undertook to arrive at a solution that best exploits the parallelism available in the legacy code and is also convenient to program, so that eventually the compilation of high-level legacy code to our architecture can be fully automated. We present three different types of architecture, comparing their resource utilization and performance, and propose that an architecture with a number of computational cores, each built along the lines of a vector instruction processor, works best in this particular scenario and is a promising candidate for a generic FPGA-based platform for scientific computation. We also present the results of experiments with various configuration parameters of the proposed architecture, to show its utility in adapting to a range of scientific applications. (Comment: This is an extended pre-print version of work presented at the international symposium on Highly Efficient Accelerators and Reconfigurable Technologies (HEART2014), Sendai, Japan, June 9-11, 2014.)
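    Flexpart's actual kernel is not reproduced here, but the kind of scalar-to-vector transformation a vector instruction processor targets can be sketched in a few lines of Python. The update formula and array names below are invented for illustration; the point is that the legacy scalar loop and the whole-array form compute the same result, with the latter mapping naturally onto wide vector lanes.

        import numpy as np

        # Hypothetical scalar legacy loop of the kind such a kernel contains:
        def step_scalar(c, w, dz, dt):
            out = c.copy()
            for i in range(1, len(c) - 1):
                out[i] = c[i] - dt * w[i] * (c[i + 1] - c[i - 1]) / (2.0 * dz)
            return out

        # The same update as whole-array (vector) operations, the form a
        # vector instruction processor executes one lane-wide instruction
        # at a time:
        def step_vector(c, w, dz, dt):
            out = c.copy()
            out[1:-1] = c[1:-1] - dt * w[1:-1] * (c[2:] - c[:-2]) / (2.0 * dz)
            return out

        c, w = np.random.rand(1024), np.random.rand(1024)
        assert np.allclose(step_scalar(c, w, 0.1, 0.01), step_vector(c, w, 0.1, 0.01))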

    Support Vector Machine classification of strong gravitational lenses

    The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass. As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both these aims. We present and discuss a Support Vector Machine (SVM) algorithm which makes use of a Gabor filterbank in order to provide learning criteria for separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100,000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey. (Comment: Accepted by MNRAS.)
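    The pipeline pattern (Gabor filterbank responses fed to an SVM) is easy to sketch with standard libraries. The filter frequencies, orientations, feature choice and toy data below are our assumptions, not the paper's tuned filterbank or training set.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import gabor_kernel
        from sklearn.svm import SVC

        # Small Gabor filterbank: 3 frequencies x 4 orientations (assumed).
        bank = [gabor_kernel(frequency=f, theta=t)
                for f in (0.1, 0.2, 0.3)
                for t in np.linspace(0, np.pi, 4, endpoint=False)]

        def gabor_features(image):
            """Mean and variance of each filter response as a feature vector."""
            feats = []
            for k in bank:
                resp = ndimage.convolve(image, np.real(k), mode="wrap")
                feats += [resp.mean(), resp.var()]
            return np.array(feats)

        # Toy stand-ins for postage-stamp cutouts labelled lens / non-lens.
        rng = np.random.default_rng(0)
        X = np.array([gabor_features(rng.random((32, 32))) for _ in range(40)])
        y = rng.integers(0, 2, size=40)

        clf = SVC(kernel="rbf").fit(X, y)   # train the SVM on Gabor features
        print(clf.predict(X[:5]))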

    Structural modelling and testing of failed high energy pipe runs: 2D and 3D pipe whip

    Copyright © 2011 Elsevier. The sudden rupture of a high energy piping system is a safety-related issue and has been the subject of extensive study, discussed in several industrial reports (e.g. [2], [3] and [4]). The dynamic plastic response of the deforming pipe segment under the blow-down force of the escaping liquid is termed pipe whip. Because of the potential damage that such an event could cause, various geometric and kinematic features of this phenomenon have been modelled from the point of view of dynamic structural plasticity. After a comprehensive summary of the behaviour of pipe runs that deform in-plane (2D) [9] and [10], the more complicated case of 3D out-of-plane deformation is discussed. Both experimental studies and modelling using analytical and FE methods have been carried out, and they show that, for a good estimate of the “hazard zone” within which unconstrained pipe whip motion could occur, a large displacement analysis is essential. The classical rigid-plastic, small deflection analysis (e.g. see [2] and [8]) is valid for estimating the initial failure mechanisms; however, it is insufficient for describing the details and consequences of large deflection behaviour.
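    A back-of-envelope Python sketch of the rigid-plastic initiation check the abstract alludes to: a plastic hinge forms at the nearest support when the blow-down thrust times its moment arm exceeds the plastic moment of the pipe section. Every number below is an illustrative assumption, not a value from the paper; the steady-state thrust estimate T = C_T p A with C_T around 1.26 for saturated steam is a commonly quoted rule of thumb (e.g. ANSI/ANS-58.2), not a result of this work.

        import math

        p = 7.0e6                 # internal pressure, Pa (assumed)
        C_T = 1.26                # thrust coefficient, typical steam value
        d_o, t = 0.1683, 0.0071   # outer diameter and wall, m (assumed 6" sch 40)
        d_i = d_o - 2 * t
        L = 1.5                   # moment arm, break to nearest support, m (assumed)
        sigma_y = 250e6           # yield stress, Pa (assumed carbon steel)

        A = math.pi * d_i**2 / 4        # break flow area
        T = C_T * p * A                 # blow-down thrust force
        Z_p = (d_o**3 - d_i**3) / 6     # plastic section modulus, hollow circle
        M_p = sigma_y * Z_p             # plastic moment of the pipe section

        # A plastic hinge forms (whip initiates) if the thrust moment beats M_p.
        print(f"thrust {T/1e3:.1f} kN, hinge threshold {M_p/L/1e3:.1f} kN")
        print("pipe whip initiates" if T * L > M_p else "no hinge forms")

    Note that this small-deflection check only predicts whether whip begins; as the abstract stresses, tracing the swept hazard zone afterwards requires a large displacement analysis.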

    Conserved- and zero-mean quadratic quantities in oscillatory systems

    We study quadratic functionals of the variables of a linear oscillatory system and their derivatives. We show that such functionals are partitioned into conserved quantities and into trivially and intrinsically zero-mean quantities. We also state an equipartition of energy principle for oscillatory systems.
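    To make the partition concrete, consider a single-degree-of-freedom illustration (our example; the paper's result concerns general linear oscillatory systems). For $\ddot{q} + \omega^2 q = 0$ with $q(t) = A\cos(\omega t)$, three quadratic functionals exhibit the three behaviours:
    \[
      E = \tfrac{1}{2}\dot{q}^2 + \tfrac{1}{2}\omega^2 q^2 = \tfrac{1}{2}A^2\omega^2
      \quad\text{(conserved)},
    \]
    \[
      q\dot{q} = \frac{d}{dt}\!\left(\tfrac{1}{2}q^2\right)
      \quad\text{(trivially zero-mean: the derivative of a bounded function)},
    \]
    \[
      \tfrac{1}{2}\dot{q}^2 - \tfrac{1}{2}\omega^2 q^2 = -\tfrac{1}{2}A^2\omega^2\cos(2\omega t)
      \quad\text{(intrinsically zero-mean over a period)}.
    \]
    Averaging the last identity over one period gives equipartition for this example: $\langle \tfrac{1}{2}\dot{q}^2 \rangle = \langle \tfrac{1}{2}\omega^2 q^2 \rangle = \tfrac{1}{4}A^2\omega^2$.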

    Multiphase procedure for landscape reconstruction and their evolution analysis. GIS modelling for areas exposed to high volcanic risk

    This paper, focussed on the province of Naples, where many municipalities with a huge demographic and building density are subject to high volcanic risk owing to the presence of the Campi Flegrei (Phlegrean Fields) caldera and the Somma-Vesuvius complex, highlights the methodological and applicative steps leading to the setting up of a multiphase procedure for landscape reconstruction and evolution analysis. From the operational point of view, the research led to: (1) the digitalisation, georeferencing and comparison of cartographies from different periods and of recent satellite images; (2) the elaboration and publication of a multilayer Story Map; (3) the accurate vectorisation of building data for each period considered, and the use of kernel density estimation in 2D and 3D; (4) the application of extrusion techniques to physical aspects and anthropic structures; (5) the production of 4D animations and film clips for each period considered. A procedure made up of preparatory sequences is thus tested, leading to a GIS model aimed at highlighting and quantifying significant problem areas and high-exposure situations, and at reconstructing the phases which have over time brought about an intense and widespread growth of artificial surfaces, considerably altering the features of the landscape and markedly raising the risk values. In a context characterised by land-use conflicts and anomalous conditions of anthropic congestion, a diagnostic approach through 2D, 3D and 4D images is used, with the aim of supporting the prevention and planning of emergencies, processing damage scenarios, identifying the main intervention priorities, and raising awareness of and educating about risk, making an impact on the collective imagination through the enhancement of specific geotechnological functionalities of great didactic interest.
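    Step (3) rests on kernel density estimation over building locations; a minimal Python sketch of that step follows. The coordinates, bandwidth defaults and grid are invented stand-ins, and the paper's actual GIS toolchain is not reproduced.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Toy stand-in for vectorised building centroids (x, y) in metres
        # for one time period; real input would come from the GIS layers.
        rng = np.random.default_rng(1)
        buildings = rng.normal(loc=[2500, 1800], scale=400, size=(500, 2))

        # 2D kernel density surface over the (assumed) study area.
        kde = gaussian_kde(buildings.T)
        xs, ys = np.meshgrid(np.linspace(0, 5000, 100), np.linspace(0, 4000, 100))
        density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

        # The density raster can then drive step (4): e.g. each cell value
        # sets the 3D extrusion height in the GIS visualisation.
        print("peak building density:", density.max())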