
    Hybrid ACO and TOFA feature selection approach for text classification

    With the rapidly increasing availability of text data on the Internet, selecting an appropriate set of features for text classification becomes ever more important, not only for reducing the dimensionality of the feature space but also for improving classification performance. This paper proposes a novel feature selection approach that improves the performance of a text classifier through an integration of the Ant Colony Optimization algorithm (ACO) and Trace Oriented Feature Analysis (TOFA). ACO is a metaheuristic search algorithm inspired by the foraging behavior of real ants, specifically the pheromone communication they use to find the shortest path to a food source. TOFA is a unified optimization framework developed to integrate and unify several state-of-the-art dimension reduction algorithms. Previous research has shown ACO to be one of the more promising approaches for optimization and feature selection problems, while TOFA is capable of dealing with large-scale text data and can be applied to several text analysis applications such as text classification, clustering and retrieval. To make the selection both efficient and effective, the proposed approach uses TOFA and classifier performance as the heuristic information for ACO. Results on the public Reuters and Brown datasets demonstrate the effectiveness of the proposed approach. © 2012 IEEE
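
    TOFA has no public reference implementation, so the hedged sketch below shows only the generic ACO feature-selection loop the abstract describes, substituting a univariate chi-squared score for the TOFA term and cross-validated classifier accuracy as the quality signal. Every name and parameter here is an illustrative assumption, not the authors' code.

        # Minimal ACO feature-selection sketch (assumes non-negative
        # term-frequency features, as is typical for text data).
        import numpy as np
        from sklearn.feature_selection import chi2
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def aco_feature_selection(X, y, n_ants=10, n_iters=20, alpha=1.0,
                                  beta=1.0, rho=0.1, subset_size=50, seed=0):
            rng = np.random.default_rng(seed)
            n_features = X.shape[1]
            tau = np.ones(n_features)            # pheromone trail per feature
            eta = np.nan_to_num(chi2(X, y)[0])   # heuristic term (stand-in for TOFA)
            eta = eta / (eta.max() + 1e-12)
            best_subset, best_score = None, -np.inf
            for _ in range(n_iters):
                subsets, scores = [], []
                for _ in range(n_ants):
                    # Each ant samples a subset with prob. ~ tau^alpha * eta^beta.
                    p = (tau ** alpha) * (eta ** beta)
                    p = p / p.sum()
                    s = rng.choice(n_features, size=subset_size, replace=False, p=p)
                    acc = cross_val_score(LogisticRegression(max_iter=1000),
                                          X[:, s], y, cv=3).mean()
                    subsets.append(s)
                    scores.append(acc)
                    if acc > best_score:
                        best_subset, best_score = s, acc
                # Evaporate, then deposit pheromone proportional to subset quality.
                tau *= (1.0 - rho)
                for s, acc in zip(subsets, scores):
                    tau[s] += acc
            return best_subset, best_score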

    Scheduling Dimension Reduction of LPV Models -- A Deep Neural Network Approach

    In this paper, the existing Scheduling Dimension Reduction (SDR) methods for Linear Parameter-Varying (LPV) models are reviewed, and a Deep Neural Network (DNN) approach is developed that achieves higher model accuracy under scheduling dimension reduction. The proposed DNN method and existing SDR methods are compared on a two-link robotic manipulator, both in terms of model accuracy and in terms of the performance of controllers synthesized with the reduced models. The methods compared include SDR for state-space models using Principal Component Analysis (PCA), Kernel PCA (KPCA) and Autoencoders (AE). On the robotic manipulator example, the DNN method achieves an improved representation of the matrix variations of the original LPV model in terms of the Frobenius norm compared to the existing methods. Moreover, when the resulting model is used for controller synthesis, improved closed-loop performance is obtained compared to the existing methods. Comment: Accepted to American Control Conference (ACC) 2020, Denver
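
    As context for the PCA baseline mentioned above, the hedged sketch below shows the basic idea of PCA-based scheduling dimension reduction: recorded scheduling trajectories are projected onto their top principal directions to obtain a lower-dimensional scheduling variable. The synthetic data and dimensions are assumptions for illustration, not the paper's setup.

        # PCA-based scheduling dimension reduction (SDR) sketch:
        # map scheduling samples rho(t) in R^n_sched to R^n_red.
        import numpy as np

        def pca_sdr(rho_samples, n_red):
            """rho_samples: (N, n_sched) array of scheduling-signal samples."""
            mean = rho_samples.mean(axis=0)
            centered = rho_samples - mean
            # Right singular vectors span the principal directions.
            _, _, Vt = np.linalg.svd(centered, full_matrices=False)
            P = Vt[:n_red]                  # (n_red, n_sched) projection map
            return centered @ P.T, P, mean  # reduced trajectory, map, offset

        # Illustrative usage: 5 raw scheduling signals that effectively
        # vary in a 2-dimensional subspace (assumed synthetic data).
        rng = np.random.default_rng(0)
        rho = rng.standard_normal((1000, 2)) @ rng.standard_normal((2, 5))
        rho += 0.01 * rng.standard_normal((1000, 5))
        phi, P, mu = pca_sdr(rho, n_red=2)
        err = np.linalg.norm(phi @ P + mu - rho) / np.linalg.norm(rho)
        print(f"relative reconstruction error: {err:.2e}")   # near zero here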

    Design of Radio-Frequency Arrays for Ultra-High Field MRI

    Magnetic Resonance Imaging (MRI) is an indispensable, non-invasive diagnostic tool for the assessment of disease and function. As an investigational device, MRI has found routine use in both basic science research and medicine, for both human and non-human subjects. Due to the potential increase in spatial resolution and signal-to-noise ratio (SNR), and the ability to exploit novel tissue contrasts, the main magnetic field strength of human MRI scanners has steadily increased since inception: beginning with 0.15 T human scanners in the early 1980s, field strengths have risen to the point that ultra-high field (UHF) 8 T MRI systems are deemed a non-significant risk by the FDA (as of 2016). However, at UHF the electromagnetic fields describing the collective behaviour of spin dynamics in human tissue assume 'wave-like' behaviour due to the increase in the precessional frequency of nuclei. At these frequencies, the electromagnetic interactions transition from purely near-field interactions to a mixture of near- and far-field mechanisms. As a result, the transmission field at UHF can produce areas of localized power deposition, leading to tissue heating, as well as tissue-independent contrast in the reconstructed images. Correcting for these difficulties is typically achieved via multi-channel radio-frequency (RF) arrays. This technology allows multiple transmitting elements to synthesize a more uniform field that can selectively minimize areas of local power deposition and remove transmission-field weighting from the final reconstructed image. This thesis provides several advancements in the design and construction of these arrays. First, in Chapter 2, a general framework for modeling the electromagnetic interactions occurring inside an RF array is adopted from multiply-coupled waveguide filters and applied to a subset of decoupling problems encountered when constructing RF arrays. It is demonstrated that, using classic filter synthesis, RF arrays of arbitrary size and geometry can be decoupled via coupling matrix synthesis. Second, in Chapters 3 and 4, this framework is extended to the design of distributed filters that simplify decoupling of RF arrays and remove the iterative tuning otherwise required when using decoupling circuits. Lastly, in Chapter 5, the coupling matrix synthesis framework is applied to the construction of a conformal transmit/receive RF array that is shape-optimized to minimize power deposition in the human head during routine MRI examinations.
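
    The thesis's coupling matrix synthesis is not reproduced here; as a toy illustration of the coupled-resonator model underlying it, the hedged sketch below computes the eigenmode frequencies of a small array from an assumed symmetric coupling matrix, showing the mode splitting that decoupling networks are designed to remove. All values are illustrative assumptions, not the thesis design.

        # Toy coupled-resonator mode splitting from a coupling matrix.
        import numpy as np

        f0 = 298e6   # approximate 1H Larmor frequency at 7 T, in Hz
        # Assumed inductive coupling coefficients for a 3-element array:
        # 0.05 between neighbours, 0.01 between the outer elements.
        K = np.array([[0.00, 0.05, 0.01],
                      [0.05, 0.00, 0.05],
                      [0.01, 0.05, 0.00]])

        # For identical magnetically coupled LC resonators, each
        # eigenvalue lam of K shifts a mode to f0 / sqrt(1 + lam).
        lam = np.linalg.eigvalsh(K)
        modes = f0 / np.sqrt(1.0 + lam)
        print("eigenmode frequencies (MHz):", np.round(modes / 1e6, 2))
        # The spread of these frequencies is the coupling-induced splitting
        # that decoupling circuits aim to collapse back to a single f0.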

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT and decision procedures; concurrency; and CPS, hardware, and industrial applications.

    Data science for engineering design: State of the art and future directions

    Engineering design (ED) is the process of solving technical problems within requirements and constraints to create new artifacts. Data science (DS) is the interdisciplinary field that uses computational systems to extract knowledge from structured and unstructured data. The synergies between these two fields have a long history, and throughout the past decades ED has increasingly benefited from integration with DS. We present a literature review at the intersection of ED and DS, identifying the tools, algorithms and data sources that show the most potential for contributing to ED, and identifying a set of challenges that future data scientists and designers should tackle to maximize the potential of DS in supporting effective and efficient design. A rigorous scoping review approach, supported by Natural Language Processing (NLP) techniques, is used to offer a review of research across two disciplines with fuzzy boundaries. The paper identifies challenges related to the two fields of research and to their interfaces. The main gaps in the literature revolve around the adaptation of computational techniques for the peculiar context of design, the identification of data sources to boost design research, and a proper featurization of this data. The challenges have been classified by their impact on ED phases and the applicability of DS methods, giving a map for future research across the fields. The scoping review shows that to fully take advantage of DS tools, collaboration between design practitioners and researchers must increase in order to open new data-driven opportunities.
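
    The review's NLP pipeline is described only at a high level; as a hedged sketch of how NLP can support such a scoping review, the snippet below clusters a handful of abstracts by TF-IDF similarity and reports the top terms of each cluster as rough topic labels. The abstracts, cluster count, and parameters are illustrative assumptions.

        # TF-IDF + k-means sketch for surfacing topic areas in a corpus
        # of abstracts during a scoping review (toy data, assumed).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer

        abstracts = [
            "surrogate models accelerate simulation-driven design optimization",
            "deep learning extracts design features from CAD geometry",
            "topology optimization with gradient-based structural solvers",
            "mining user reviews to elicit product design requirements",
        ]

        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(abstracts)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        terms = np.array(vec.get_feature_names_out())
        for c in range(km.n_clusters):
            # Highest-weight centroid terms serve as a rough cluster label.
            top = terms[np.argsort(km.cluster_centers_[c])[::-1][:3]]
            print(f"cluster {c}: {', '.join(top)}")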
