
    Spatial Aggregation: Theory and Applications

    Visual thinking plays an important role in scientific reasoning. Based on research in automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking, imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input, and produces high-level descriptions of structure, behavior, or control actions. It computes multiple layers of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators, such as aggregation, classification, and localization, to perform bidirectional mapping between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers -- KAM, MAPS, and HIPAIR -- in terms of the spatial aggregation generic operators, by mixing and matching a library of commonly used routines.
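    One layer of the paradigm can be sketched in a few lines: the aggregation operator builds a neighborhood graph over field points, and the classification operator merges similar neighbors into equivalence classes (spatial aggregates). The toy field, distance metric, and similarity threshold below are invented for illustration and are not taken from KAM, MAPS, or HIPAIR.

```python
# Minimal sketch of one spatial-aggregation layer; data and thresholds are
# invented for the example.
from itertools import combinations

def aggregate(points, radius):
    """Aggregation operator: connect field points closer than `radius`
    (Manhattan distance), yielding a neighborhood graph as an edge set."""
    edges = set()
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        if abs(p[0] - q[0]) + abs(p[1] - q[1]) <= radius:
            edges.add((i, j))
    return edges

def classify(field, edges, similar):
    """Classification operator: keep only edges between 'similar' neighbors,
    then take connected components as equivalence classes."""
    kept = {(i, j) for i, j in edges if similar(field[i], field[j])}
    parent = list(range(len(field)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in kept:
        parent[find(i)] = find(j)
    classes = {}
    for i in range(len(field)):
        classes.setdefault(find(i), []).append(i)
    return list(classes.values())

# A toy scalar field sampled at 2-D points: (x, y, value).
field = [(0, 0, 1.0), (1, 0, 1.1), (5, 0, 9.0), (6, 0, 9.2)]
graph = aggregate([(x, y) for x, y, _ in field], radius=2)
groups = classify(field, graph, similar=lambda p, q: abs(p[2] - q[2]) < 0.5)
print(groups)  # two spatial aggregates: a low-value and a high-value region
```

    A full problem solver would apply further layers of the same operators to these aggregates, producing successively more abstract descriptions.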

    Smart Ultrasound Remote Guidance Experiment (SURGE) Preliminary Findings

    To date, diagnostic quality ultrasound images have been obtained aboard the International Space Station (ISS) using the ultrasound of the Human Research Facility (HRF) rack in the Laboratory module. Through the Advanced Diagnostic Ultrasound in Microgravity (ADUM) and the Braslet-M Occlusion Cuffs (BRASLET SDTO) studies, non-expert ultrasound operators aboard the ISS have performed cardiac, thoracic, abdominal, vascular, ocular, and musculoskeletal ultrasound assessments using remote guidance from ground-based ultrasound experts. With exploration class missions to the lunar and Martian surfaces on the horizon, crew medical officers will need to operate with greater autonomy, given communication delays (round trip times of up to 5 seconds for the Moon and 90 minutes for Mars) and longer periods of communication blackouts (due to orbital constraints of communication assets). The SURGE project explored the feasibility and training requirements of having non-expert ultrasound operators perform autonomous ultrasound assessments in a simulated exploration mission outpost. The project aimed to identify experience, training, and human factors requirements for crew medical officers to perform autonomous ultrasonography. All of these aims pertained to the following risks from the NASA Bioastronautics Road Map: 1) Risk 18: Major Illness and Trauma; 2) Risk 20: Ambulatory Care; 3) Risk 22: Medical Informatics, Technologies, and Support Systems; and 4) Risk 23: Medical Skill Training and Maintenance.

    Apparel sizing using trimmed PAM and OWA operators

    This paper is concerned with apparel sizing system design. One of the most important issues in the apparel development process is to define a sizing system that provides a good fit to the majority of the population. A sizing system classifies a specific population into homogeneous subgroups based on some key body dimensions. Standard sizing systems range linearly from very small to very large. However, anthropometric measures do not grow linearly with size, so such systems cannot accommodate all body types. It is important to base each class in the sizing system on a real prototype that is as representative as possible of that class. In this paper we propose a methodology to develop an efficient apparel sizing system based on clustering techniques jointly with OWA operators. Our approach is a natural extension and improvement of the methodology proposed by McCulloch, Paal, and Ashdown (1998), and we apply it to the database obtained from an anthropometric survey of the Spanish female population performed during 2006. This paper has been partially supported by grants TIN2009-14392-C02-01, TIN2009-14392-C02-02, GV/2011/004, and P1.1A2009-02. We would also like to thank the Biomechanics Institute of Valencia for providing us with the data set, and the Spanish "Ministerio de Sanidad y Consumo" for having promoted and coordinated the "Anthropometric Study of the Female Population in Spain". Ibanez, M.; Vinue, G.; Alemany Mut, MS.; Simo, A.; Epifanio, I.; Domingo, J.; Ayala, G. (2012). Apparel sizing using trimmed PAM and OWA operators. Expert Systems with Applications, 39(12):10512-10520. https://doi.org/10.1016/j.eswa.2012.02.127
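    The OWA (Ordered Weighted Averaging) operator mentioned above sorts its inputs before applying the weights, so the weights attach to rank positions rather than to particular measurements. A minimal sketch, in which the weights and the fit discrepancies are invented for illustration (the paper's actual weight choices are not reproduced here):

```python
# Illustrative OWA operator; weights and measurements below are invented.
def owa(values, weights):
    """OWA: sort the inputs in descending order, then take the weighted sum.
    Weights must be non-negative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Aggregate three hypothetical fit discrepancies (cm) between a subject's
# measurements and a candidate size prototype, emphasising the worst fit:
discrepancies = [2.0, 5.0, 1.0]  # e.g. bust, waist, hip
print(owa(discrepancies, [0.5, 0.3, 0.2]))  # 0.5*5 + 0.3*2 + 0.2*1 = 3.3
```

    Putting the largest weight on the first (largest) ordered value makes the aggregate penalise the worst-fitting dimension, which is one way such an operator can sharpen the dissimilarity used by the trimmed PAM clustering.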

    PRESISTANT: Learning based assistant for data pre-processing

    Data pre-processing is one of the most time-consuming and relevant steps in a data analysis process (e.g., a classification task). A given data pre-processing operator (e.g., a transformation) can have a positive, negative, or zero impact on the final result of the analysis. Expert users have the required knowledge to find the right pre-processing operators. Non-experts, however, are overwhelmed by the number of pre-processing operators, and it is challenging for them to find operators that would positively impact their analysis (e.g., increase the predictive accuracy of a classifier). Existing solutions either assume that users have expert knowledge, or they recommend pre-processing operators that are only "syntactically" applicable to a dataset, without taking into account their impact on the final analysis. In this work, we aim at providing assistance to non-expert users by recommending data pre-processing operators that are ranked according to their impact on the final analysis. We developed a tool, PRESISTANT, that uses Random Forests to learn the impact of pre-processing operators on the performance (e.g., predictive accuracy) of five classification algorithms: J48, Naive Bayes, PART, Logistic Regression, and Nearest Neighbor. Extensive evaluations of the recommendations provided by our tool show that PRESISTANT can effectively help non-experts achieve improved results in their analytical tasks.
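    The recommendation step described above can be sketched as ranking candidate operators by a meta-model's predicted impact on accuracy. The sketch below stands in a hard-coded score table for the Random-Forest meta-model; the operator names, dataset features, and scores are all invented for illustration, not taken from PRESISTANT.

```python
# Hedged sketch of impact-based operator recommendation; the score table
# stands in for a learned Random-Forest meta-model.
def rank_operators(dataset_features, predict_gain, operators):
    """Return (operator, predicted accuracy gain) pairs, best first."""
    scored = [(op, predict_gain(dataset_features, op)) for op in operators]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Stand-in for a meta-model trained over past (dataset, operator, accuracy)
# triples; in PRESISTANT this would be a Random Forest.
toy_gains = {"discretize": 0.04, "normalize": 0.01, "log_transform": -0.02}
predict = lambda feats, op: toy_gains[op]

ranking = rank_operators({"n_rows": 150, "n_numeric": 4}, predict,
                         ["normalize", "discretize", "log_transform"])
print(ranking[0][0])  # the operator predicted to help most
```

    Ranking by predicted impact, rather than by syntactic applicability, is what separates this approach from the existing solutions criticised in the abstract.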

    Arctic Standards: Recommendations on Oil Spill Prevention, Response, and Safety in the U.S. Arctic Ocean

    Oil spilled in Arctic waters would be particularly difficult to remove. Current technology has not been proved to effectively clean up oil when mixed with ice or when trapped under ice. An oil spill would have a profoundly adverse impact on the rich and complex ecosystem found nowhere else in the United States. The Arctic Ocean is home to bowhead, beluga, and gray whales; walruses; polar bears; and other magnificent marine mammals, as well as millions of migratory birds. A healthy ocean is important for these species and integral to the continuation of hunting and fishing traditions practiced by Alaska Native communities for thousands of years. To aid the United States in its efforts to modernize Arctic technology and equipment standards, this report examines the fierce Arctic conditions in which offshore oil and gas operations could take place and then offers a summary of key recommendations for the Interior Department to consider as it develops world-class, Arctic-specific regulatory standards for these activities. Pew's recommendations call for improved technology, equipment, and procedural requirements that match the challenging conditions in the Arctic, and for full public participation and transparency throughout the decision-making process. Pew is not opposed to offshore drilling, but a balance must be achieved between responsible energy development and protection of the environment. It is essential that appropriate standards be in place for safety and for oil spill prevention and response in this extreme, remote, and vulnerable ecosystem. This report recommends updating regulations to include Arctic-specific requirements and codifying temporary guidance into regulation. The appendixes to this report provide substantially more detail on the report's recommendations, including technical background documentation and additional referenced materials. Please refer to the full set of appendixes for a complete set of recommendations.
    This report and its appendixes offer guidelines for responsible hydrocarbon development in the U.S. Arctic Ocean.

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed in creating the RAs. According to the results of our survey, we suggest that currently the best design for RAs is to embed model-based RAs, i.e., MORA (as software), in a neural network based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting the 'learning element', the MORA will become the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude our thesis, we propose an architecture of La MORA.
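    The risk analysis function named above, FMEA, is commonly operationalised by scoring each failure mode with a Risk Priority Number (RPN = severity x occurrence x detection, each rated 1-10) and ranking the modes. This is a generic FMEA sketch, not the thesis's MORA implementation, and the failure modes and ratings below are invented for illustration.

```python
# Minimal FMEA-style sketch: rank invented failure modes by their
# Risk Priority Number (severity * occurrence * detection, each 1-10).
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

failure_modes = {
    "pump seal leak":     (7, 4, 3),
    "sensor drift":       (4, 6, 7),
    "valve stuck closed": (9, 2, 5),
}
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: RPN={rpn(*scores)}")
```

    A learning RAs could revise the occurrence and detection ratings automatically as operating data accumulates, which is the kind of knowledge acquisition the thesis attributes to La MORA.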

    Hybridation of Bayesian networks and evolutionary algorithms for multi-objective optimization in an integrated product design and project management context

    A better integration of preliminary product design and project management processes at early steps of system design is nowadays a key industrial issue. The aim is therefore to make firms evolve from the classical sequential approach (first product design, then project design and management) to new integrated approaches. In this paper, a model for integrated product/project optimization is first proposed, which allows decisions coming from the product and project managers to be taken into account simultaneously. However, the resulting model has considerable underlying complexity, and a multi-objective optimization technique is required to provide managers with appropriate scenarios in a reasonable amount of time. The proposed approach is based on an original evolutionary algorithm called the evolutionary algorithm oriented by knowledge (EAOK). This algorithm is based on the interaction between an adapted evolutionary algorithm and a model of knowledge (MoK) used to give relevant orientations during the search process. The evolutionary operators of the EA are modified in order to take these orientations into account. The MoK is based on the Bayesian Network formalism and is built both from expert knowledge and from individuals generated by the EA. A learning process updates the probabilities of the BN from a set of selected individuals. At each cycle of the EA, the probabilities contained in the MoK are used to bias the new evolutionary operators. This method not only ensures faster and more effective optimization, but also provides the decision maker with a graphic and interactive model of knowledge linked to the studied project. An experimental platform has been developed to test the algorithm, and a large campaign of tests allows comparing different strategies as well as assessing the benefits of this novel approach in comparison with a classical EA.
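    The learn-then-bias cycle described above can be sketched with the model of knowledge reduced to independent per-bit probabilities, i.e., a degenerate Bayesian network with no edges (a full BN, as in EAOK, would also capture dependencies between variables, and EAOK additionally injects expert knowledge). The one-max fitness, population sizes, and generation count below are invented for the example.

```python
# Illustrative knowledge-guided EA cycle (UMDA/PBIL-style); the real EAOK
# uses a full Bayesian network plus expert knowledge as its MoK.
import random

random.seed(0)

def fitness(ind):
    return sum(ind)  # toy objective: maximise the number of 1-bits

def learn_model(selected):
    """MoK update: estimate bit probabilities from the selected individuals."""
    n = len(selected[0])
    return [sum(ind[i] for ind in selected) / len(selected) for i in range(n)]

def sample(model):
    """Biased generation: draw a new individual from the learned model."""
    return [1 if random.random() < p else 0 for p in model]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(40)]
for _ in range(30):                                # one EA cycle per loop
    elite = sorted(pop, key=fitness, reverse=True)[:10]
    model = learn_model(elite)                     # learning process
    pop = elite + [sample(model) for _ in range(30)]  # biased operators

best = max(pop, key=fitness)
print(fitness(best))
```

    Because the model is re-estimated from the elite at every cycle, the search is progressively biased toward regions the selected individuals occupy, which is the mechanism the abstract credits for faster optimization.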