
    Towards Error Handling in a DSL for Robot Assembly Tasks

    This work-in-progress paper presents our work on a domain-specific language (DSL) for tackling the issue of programming robots for small-sized batch production. We observe that as the complexity of assembly increases, so does the likelihood of errors, and these errors need to be addressed. Nevertheless, it is essential that programming and setting up the assembly remains fast and allows quick changeovers, easy adjustments, and reconfigurations. In this paper we present an initial design and implementation that extends an existing DSL for assembly operations with error specification, error handling, and advanced move commands incorporating error tolerance. The DSL is used as part of a framework that aims to tackle uncertainties through a probabilistic approach. Comment: Presented at DSLRob 2014 (arXiv:cs/1411.7148).
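The combination of error specification, handlers, and error-tolerant moves described above can be illustrated with a small sketch. All names here (move_to, AssemblyError, the simulated feedback) are invented for illustration and are not the paper's actual DSL syntax:

```python
# Hypothetical sketch of an error-tolerant move with retry and a handler;
# these constructs are assumptions, not the DSL presented in the paper.

class AssemblyError(Exception):
    pass

def simulated_position_error(pose, attempt):
    # Stand-in for real robot feedback: error shrinks on each retry.
    return 0.5 / (attempt + 1)

def move_to(pose, tolerance=0.0, attempts=1):
    """Try a move; succeed if the (simulated) position error is within tolerance."""
    for attempt in range(attempts):
        error = simulated_position_error(pose, attempt)
        if error <= tolerance:
            return error
    raise AssemblyError(f"move to {pose} failed after {attempts} attempts")

# Error specification: a peg insertion tolerating 0.2 mm, retried up to 3 times,
# with an explicit handler if all attempts fail.
try:
    residual = move_to("peg_hole", tolerance=0.2, attempts=3)
    print(f"inserted with residual error {residual:.3f}")
except AssemblyError as e:
    print("handler invoked:", e)
```

The point of the sketch is the separation of concerns the abstract describes: tolerance lives on the move command, while recovery logic lives in a handler.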

    An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks

    QoS identification for untrustworthy Web services is critical in QoS management in service computing, since the performance of untrustworthy Web services may result in QoS degradation. The key issue is to intelligently learn the characteristics of trustworthy Web services at different QoS levels, and then to identify the untrustworthy ones according to the characteristics of their QoS metrics. Among intelligent identification approaches, deep neural networks have emerged as a powerful technique in recent years. In this paper, we propose a novel two-phase neural network model to identify untrustworthy Web services. In the first phase, Web services are collected from a published QoS dataset, and we design a feedforward neural network model to build a classifier for Web services with different QoS levels. In the second phase, we employ a probabilistic neural network (PNN) model to identify the untrustworthy Web services within each class. The experimental results show the proposed approach achieves a 90.5% identification ratio, far higher than competing approaches. Comment: 8 pages, 5 figures.
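The second phase relies on a probabilistic neural network, which is essentially a Parzen-window kernel classifier. A minimal sketch of a PNN, with toy QoS features (response time, availability) invented for illustration:

```python
import numpy as np

# Minimal PNN sketch: class score = mean Gaussian kernel between the test
# point and that class's training exemplars. The features and labels below
# are illustrative assumptions, not the paper's dataset.

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify each test point by the class with the largest mean kernel response."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to exemplars
        k = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel responses
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(int(classes[int(np.argmax(scores))]))
    return preds

# Toy data: 0 = trustworthy, 1 = untrustworthy (response time, availability)
X = np.array([[0.1, 0.99], [0.2, 0.97], [0.15, 0.98],
              [0.9, 0.60], [0.8, 0.55], [0.95, 0.50]])
y = np.array([0, 0, 0, 1, 1, 1])
print(pnn_predict(X, y, np.array([[0.12, 0.985], [0.85, 0.58]])))  # → [0, 1]
```

In the paper's setting, this step would run within each QoS level produced by the phase-one feedforward classifier.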

    Miniature mobile sensor platforms for condition monitoring of structures

    In this paper, a wireless, multisensor inspection system for nondestructive evaluation (NDE) of materials is described. The sensor configuration enables two inspection modes: magnetic (flux leakage and eddy current) and noncontact ultrasound. Each is designed to function in a complementary manner, maximizing the potential for detection of both surface and internal defects. Particular emphasis is placed on the generic architecture of a novel, intelligent sensor platform and its positioning on the structure under test. The sensor units are capable of wireless communication with a remote host computer, which controls manipulation and data interpretation. Results are presented in the form of automatic scans with different NDE sensors in a series of experiments on thin plate structures. To highlight the advantage of utilizing multiple inspection modalities, data fusion approaches are employed to combine data collected by complementary sensor systems. Data fusion is shown to demonstrate the potential for improved inspection reliability.
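The complementary-sensor idea can be illustrated with a toy fusion rule: normalize each modality's defect map and take the pixel-wise maximum, so a defect visible to either modality survives in the fused image. The maximum rule and the scan values are assumptions for illustration, not the paper's fusion method:

```python
import numpy as np

def normalize(scan):
    """Rescale a scan to the range [0, 1]."""
    scan = scan.astype(float)
    rng = scan.max() - scan.min()
    return (scan - scan.min()) / rng if rng else np.zeros_like(scan)

def fuse_max(scan_a, scan_b):
    """Pixel-wise maximum of two normalized scans (illustrative fusion rule)."""
    return np.maximum(normalize(scan_a), normalize(scan_b))

surface = np.array([[0, 5, 0], [0, 0, 0]])   # surface crack seen magnetically
internal = np.array([[0, 0, 0], [0, 7, 0]])  # internal flaw seen ultrasonically
fused = fuse_max(surface, internal)
print(fused)  # both defects appear in the fused map
```

Real NDE fusion schemes are typically more elaborate (weighted, probabilistic, or feature-level), but the principle of combining surface-sensitive and depth-sensitive modalities is the same.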

    A Multiple Classifier System Identifies Novel Cannabinoid CB2 Receptor Ligands

    Drugs have become an essential part of our lives due to their ability to improve people’s health and quality of life. However, for many diseases, approved drugs are not yet available or existing drugs have undesirable side effects, making the pharmaceutical industry strive to discover new drugs and active compounds. The development of drugs is an expensive process, which typically starts with the detection of candidate molecules (screening) for an identified protein target. To this end, the use of high-performance screening techniques has become a critical issue in order to reduce the high costs. Therefore, the popularity of computer-based screening (often called virtual screening or in-silico screening) has rapidly increased during the last decade. A wide variety of Machine Learning (ML) techniques has been used in conjunction with chemical structure and physicochemical properties for screening purposes, including (i) simple classifiers, (ii) ensemble methods, and more recently (iii) Multiple Classifier Systems (MCS). In this work, we apply an MCS for virtual screening (D2-MCS) using circular fingerprints. We applied our technique to a dataset of cannabinoid CB2 ligands obtained from the ChEMBL database. The HTS collection of Enamine (1,834,362 compounds) was virtually screened with D2-MCS to identify 48,432 potentially active molecules. This list was subsequently clustered based on circular fingerprints, and from each cluster the most active compound was retained. From these, the top 60 were kept, and 21 novel compounds were purchased. Experimental validation confirmed six highly active hits (>50% displacement at 10 μM and subsequent Ki determination) and an additional five medium active hits (>25% displacement at 10 μM). D2-MCS hence provided a hit rate of 29% for highly active compounds and an overall hit rate of 52%.
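The post-screening step of clustering predicted actives by fingerprint and keeping the most active compound per cluster can be sketched as follows. The bit-vector fingerprints, Tanimoto threshold, and greedy (Butina-style) leader clustering here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two boolean fingerprint vectors."""
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

def pick_cluster_representatives(fps, scores, threshold=0.6):
    """Greedy leader clustering, best-scored first: each new leader is the
    most active member of its cluster; return the leaders' indices."""
    order = np.argsort(scores)[::-1]  # most active first
    reps = []
    for i in order:
        if all(tanimoto(fps[i], fps[j]) < threshold for j in reps):
            reps.append(int(i))       # i is dissimilar to all leaders so far
    return reps

# Four toy 4-bit fingerprints with predicted activity scores
fps = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]], dtype=bool)
scores = np.array([0.9, 0.7, 0.8, 0.6])
print(pick_cluster_representatives(fps, scores))  # → [0, 2]
```

Visiting compounds in descending score order guarantees that whenever a compound is absorbed into an existing cluster, the cluster's leader already scores at least as high, which is exactly the "most active compound per cluster" selection.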

    Developing a global risk engine

    Risk analysis is a critical link in the reduction of casualties and damage due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable, and flexible risk assessment software. However, there is a significant disparity between the high-quality scientific data developed by researchers and the availability of versatile, open, and user-friendly risk analysis tools to meet the demands of end-users. In the past few years, several open-source software packages have been developed that play an important role in seismic research, such as OpenSHA and OpenSEES. There is, however, still a gap when it comes to open-source risk assessment tools and software. In order to fill this gap, the Global Earthquake Model (GEM) has been created. GEM is an internationally sanctioned program initiated by the OECD that aims to build independent, open standards to calculate and communicate earthquake risk around the world. This initiative started with a one-year pilot project named GEM1, during which a number of existing risk software packages were evaluated. After a critical review of the results, it was concluded that none of the packages was adequate for GEM's requirements and, therefore, a new object-oriented tool was to be developed. This paper presents a summary of some of the best-known applications used in risk analysis, highlighting the main aspects that were considered for the development of this risk platform. The research carried out to gather the information necessary to build this tool was distributed across four areas: information technology approach, seismic hazard resources, vulnerability assessment methodologies, and sources of exposure data. The main aspects and findings for each of these areas are presented, as well as how these features were incorporated into the up-to-date risk engine.
Currently, the risk engine is capable of predicting human or economic losses worldwide, considering both deterministic and probabilistic events and using vulnerability curves. A first version of GEM will become available at the end of 2013. Until then, the risk engine will continue to be developed by a growing community of developers, using a dedicated open-source platform.
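Loss estimation with a vulnerability curve, as mentioned above, can be reduced to a one-line computation: interpolate the mean damage ratio at the hazard intensity and multiply by the exposed value. The curve points and values below are invented for illustration and are not GEM data:

```python
import numpy as np

# Toy vulnerability curve: ground-motion intensity -> mean damage ratio (0..1).
# These anchor points are illustrative assumptions only.
intensities = np.array([0.1, 0.3, 0.5, 0.7])
damage_ratios = np.array([0.0, 0.1, 0.4, 0.8])

def expected_loss(intensity, exposed_value):
    """Expected loss = exposed value x interpolated mean damage ratio."""
    ratio = np.interp(intensity, intensities, damage_ratios)
    return ratio * exposed_value

# A deterministic scenario at intensity 0.4 on a 1,000,000-unit portfolio:
print(expected_loss(0.4, 1_000_000))  # damage ratio 0.25 -> 250000.0
```

A probabilistic calculation would repeat this over many sampled events and aggregate the losses, but the per-event step is the same curve lookup.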

    Neural network architectures to analyze OPAD data

    A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the Space Shuttle Main Engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.