92 research outputs found

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that can lead to different recovery rates for patients. This study therefore combines a statistical method with decision tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of the data for training. Instead of deriving rules directly from the resulting tree, we first use the value of each node as a cut point to generate all possible rules from the tree. Once all possible rules have been generated, we verify them with the t-test to identify useful descriptive rules. Experimental results show that our approach can uncover new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
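
    A minimal sketch of this two-step approach in Python: every split threshold in a fitted decision tree becomes a candidate cut point, and a t-test then checks which cuts separate the outcome significantly. The variable names and the use of scikit-learn and SciPy are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.tree import DecisionTreeClassifier

    def significant_cut_points(X, y, feature_names, alpha=0.05):
        # Fit a tree on the whole (small) data set, as the paper does.
        tree = DecisionTreeClassifier(max_depth=4).fit(X, y).tree_
        kept = []
        for node in range(tree.node_count):
            f = tree.feature[node]
            if f < 0:                      # leaf: no split, hence no cut point
                continue
            cut = tree.threshold[node]
            below, above = y[X[:, f] <= cut], y[X[:, f] > cut]
            if len(below) < 2 or len(above) < 2:
                continue
            # Verify the candidate rule: do outcomes (e.g. a recovery
            # indicator) differ significantly across the cut point?
            _, p = ttest_ind(below, above, equal_var=False)
            if p < alpha:
                kept.append((feature_names[f], float(cut), float(p)))
        return kept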

    Parallel programming models and tools for many-core platforms

    The trade-off between power consumption, performance, programmability, and portability drives all computing industry designs, particularly in the mobile and embedded systems domains. Two design paradigms have proven particularly promising in this context: architectural heterogeneity and many-core processors. Parallel programming models are key to effectively harnessing the computational power of heterogeneous many-core SoCs. This thesis presents a set of techniques and HW/SW extensions that improve performance and simplify programmability for heterogeneous many-core platforms. The contributions cover the entire software stack for many-core platforms, from hardware abstraction layers running on top of bare metal to programming models, and from hardware extensions for efficient parallelism support to middleware that enables optimized resource management within many-core platforms. First, we present mechanisms to decrease parallelism overheads in parallel programming runtimes for many-core platforms, targeting fine-grain parallelism. Second, we present programming model support that enables the offload of computational kernels within heterogeneous many-core systems. Third, we present a novel approach to dynamically sharing and managing many-core platforms when multiple applications coded with different programming models execute concurrently. All these contributions were validated on STMicroelectronics STHORM, a real embodiment of a state-of-the-art many-core system. Hardware extensions and architectural explorations were evaluated using VirtualSoC, a SystemC-based cycle-accurate simulator of many-core platforms.

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be an annual meeting for presenting and discussing expertise and state-of-the-art research on SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account the emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    Assessment of correlation algorithms and development of an experimental software for measuring glacier displacements from repeat imagery

    Image matching, or registration, is the process in which two or more images are compared to find corresponding areas or objects. Several different methods are used to quantify displacements (e.g. cross-correlation methods, Fourier methods, least-squares based methods, and wavelet based methods (Brown, 1992; Zitova & Flusser, 2003)), and they are applied in a variety of fields. In the geosciences, digital image matching has been used to measure displacements in a range of studies, including mass movements and slope deformations, ice sheet motion, arctic and mountain glacier and rock glacier displacements, and terrain model generation. In this thesis, one spatial-domain method (normalized cross-correlation (NCC)) and two Fourier-based image matching methods (phase and gradient correlation) are compared and evaluated using different parameterizations and several test images covering glaciers and rock glaciers. Geometric and radiometric corrections are considered, as well as pre- and post-processing techniques. Additionally, an experimental software package including image matching algorithms has been developed; the code development process and implementations are discussed. Three cases were tested, with several tests in each case. Results showed that, compared to NCC methods, Fourier-based methods generally (1) were more robust against snow cover and shadow differences, (2) had better filtering capabilities, and (3) processed approximately three times as fast. NCC-based methods, however, allowed for more rotation and deformation of image features in the matching process, but generally achieved a lower signal-to-noise ratio (SNR) in the results. The implemented quad-tree operator, designed and developed to improve the NCC technique by automatically adjusting the reference window sizes, did not achieve significantly more robust results than ordinary NCC methods. Among the algorithms tested in this work, the gradient correlation algorithm is considered the most suitable approach for quantifying glacier displacements from repeat imagery: it is not sensitive to surface cover differences, generally allows for acceptable amounts of image feature deformation, and is one of the fastest algorithms tested. Results from the rock glacier in Muragl valley (images from 1981 and 1994), Tokositna glacier (2000 and 2001), and Columbia glacier (2002) showed maximum average displacements of 0.46 m/y, 3.1 m/d, and 7.5 m/d, respectively.
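
    A minimal sketch of the two matching families compared here, for a single pair of same-size image patches: a spatial-domain zero-mean NCC score and a Fourier-domain phase correlation that recovers the integer pixel shift. The function names and the NumPy-only setup are illustrative assumptions, not the thesis software.

    import numpy as np

    def ncc_score(a, b):
        # Zero-mean normalized cross-correlation of two same-size patches.
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def phase_correlation(a, b):
        # Shift (dy, dx) of patch b relative to patch a via the normalized
        # cross-power spectrum: only phase is kept, so the inverse FFT
        # ideally yields one sharp peak at the displacement.
        A, B = np.fft.fft2(a), np.fft.fft2(b)
        R = np.conj(A) * B
        R /= np.abs(R) + 1e-12
        corr = np.fft.ifft2(R).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Unwrap cyclic peak indices into signed shifts.
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return int(dy), int(dx)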

    The 1988 Goddard Conference on Space Applications of Artificial Intelligence

    This publication comprises the papers presented at the 1988 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland on May 24, 1988. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: mission operations support, planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; modeling and simulation; and development tools/methodologies.

    Sensor web geoprocessing on the grid

    Recent standardisation initiatives in the fields of grid computing and geospatial sensor middleware provide an exciting opportunity for composing large-scale geospatial monitoring and prediction systems from existing components. Sensor middleware standards are paving the way for the emerging sensor web, which is envisioned to make millions of geospatial sensors and their data publicly accessible by providing discovery, tasking, and query functionality over the internet. In a similar fashion, concurrent development is taking place in the field of grid computing, whereby the virtualisation of computational and data storage resources using middleware abstraction provides a framework for sharing computing resources. The sensor web and grid computing share a common vision of world-wide connectivity, and in their current form both are realised using web services as the underlying technological framework. The integration of sensor web and grid computing middleware using open standards is expected to facilitate interoperability and scalability in near-real-time geoprocessing systems. The aim of this thesis is to develop an appropriate conceptual and practical framework in which open standards in grid computing, sensor web, and geospatial web services can be combined as a technological basis for the monitoring and prediction of geospatial phenomena in the earth systems domain, to facilitate real-time decision support. The primary topic of interest is how real-time sensor data can be processed on a grid computing architecture. This is addressed by creating a simple typology of real-time geoprocessing operations with respect to grid computing architectures. A geoprocessing system exemplar of each operation in the typology is implemented using contemporary tools and techniques, providing a basis from which to validate the standards frameworks and highlight issues of scalability and interoperability. It was found that it is possible to combine standardised web services from each of these domains, despite interoperability issues resulting from differences in web service style and security between specifications. A novel integration method for the continuous processing of a sensor observation stream is suggested, in which a perpetual processing job is submitted as a single continuous compute job. Although this method was found to be successful, two key challenges remain: a mechanism for consistently scheduling real-time jobs within an acceptable time frame must be devised, and the trade-off between efficient grid resource utilisation and processing latency must be balanced. The lack of actual implementations of distributed geoprocessing systems built using the sensor web and grid computing has hindered the development of standards, tools, and frameworks in this area. This work contributes to the small number of existing implementations in the field by identifying potential workflow bottlenecks in such systems and gaps in the existing specifications. Furthermore, it sets out a typology of real-time geoprocessing operations that is anticipated to facilitate the development of real-time geoprocessing software.
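
    A minimal sketch of the suggested continuous-processing method: instead of submitting one grid job per observation, a single perpetual job polls the observation stream and processes records as they arrive. fetch_observations() and process() are hypothetical stand-ins for a sensor web client and a geoprocessing kernel; the poll interval illustrates the latency/utilisation trade-off noted above.

    import time

    def perpetual_job(fetch_observations, process, poll_interval=5.0):
        # Submitted once as a single continuous compute job; it never exits.
        latest = None                      # timestamp of the newest record handled
        while True:
            for obs in fetch_observations(since=latest):   # hypothetical stream client
                process(obs)               # geoprocessing operation on one observation
                latest = obs["time"]
            # A shorter interval lowers processing latency but keeps the
            # grid node busier between observations.
            time.sleep(poll_interval)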

    Code Generation and Global Optimization Techniques for a Reconfigurable PRAM-NUMA Multicore Architecture


    Speeding up NCC-based template matching using parallel multimedia instructions

    This paper describes the mapping of a recently introduced template matching algorithm based on Normalized Cross Correlation (NCC) onto a general-purpose processor endowed with SIMD (Single Instruction Multiple Data) multimedia instructions. The algorithm relies on the Bounded Partial Correlation (BPC) technique, which applies a sufficient condition to detect unsatisfactory matching candidates at a reduced computational cost. First, we briefly describe the BPC technique and highlight the expensive computations involved. Then, based on an analysis of the major SIMD multimedia instruction set extensions available today, we define a processor-independent multimedia instruction set and show how to carry out the most expensive BPC calculations using these pseudo-instructions. Finally, we provide experimental results obtained by mapping the proposed algorithm onto a mainstream multimedia SIMD instruction set (i.e. MMX), and compare them with results for the brute-force NCC algorithm. The results show that the BPC technique is well suited to a parallel SIMD-style mapping and that its effectiveness can be significantly improved using the multimedia instructions available in most general-purpose CPUs.
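
    A minimal sketch of the BPC idea in plain NumPy (the paper's contribution is its SIMD/MMX mapping, which is not reproduced here): the NCC numerator is computed exactly over the first rows of each candidate window, the remainder is bounded via the Cauchy-Schwarz inequality, and the candidate is skipped when even this optimistic bound cannot beat the best score found so far. The names and the row-wise split are illustrative assumptions.

    import numpy as np

    def ncc_with_bpc(image, tmpl, split_row=None):
        th, tw = tmpl.shape
        split_row = split_row or th // 2            # rows correlated exactly
        t_norm = np.linalg.norm(tmpl)
        t_tail = np.linalg.norm(tmpl[split_row:])   # template tail energy, precomputed
        best, best_pos = -1.0, None
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                win = image[y:y + th, x:x + tw]
                denom = np.linalg.norm(win) * t_norm + 1e-12
                partial = float((win[:split_row] * tmpl[:split_row]).sum())
                # Cauchy-Schwarz bound on the uncomputed part of the numerator.
                bound = np.linalg.norm(win[split_row:]) * t_tail
                if (partial + bound) / denom <= best:
                    continue                        # sufficient condition: cannot win
                score = (partial + float((win[split_row:] * tmpl[split_row:]).sum())) / denom
                if score > best:
                    best, best_pos = score, (y, x)
        return best_pos, best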