
    URBANO: A Tour-Guide Robot Learning to Make Better Speeches

    Ongoing efforts to develop autonomous robots are enabling increasingly intelligent and cognitive skills. This paper proposes an automatic presentation generator for a robot guide, treated as one more such cognitive skill. Presentations are made up of groups of paragraphs, and the best paragraphs are selected based on a semantic understanding of their characteristics, on the restrictions defined for the presentation, and on the quality criteria appropriate for a public presentation. This work is part of the ROBONAUTA project of the Intelligent Control Research Group at the Universidad Politécnica de Madrid to create "awareness" in a robot guide. The software developed in the project has been verified on the tour-guide robot Urbano. The most important aspect of this proposal is that the design uses learning as the means to optimize the quality of the presentations, which requires optimized decision making in several phases. The quality index of a presentation is modeled using fuzzy logic and represents the robot's beliefs about what is good, bad, or indifferent in a presentation. This fuzzy system is used to select the most appropriate group of paragraphs for a presentation, and the robot's beliefs keep evolving, through a genetic algorithm over the rules, to coincide with the opinions of the public. With this tool, the tour-guide robot delivers a presentation that satisfies the objectives and restrictions, and it automatically identifies the best paragraphs to find the most suitable set of contents for every public profile.
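    The combination of a fuzzy quality model with genetic evolution of its rules can be illustrated with a small sketch. The feature names, membership functions, and fitness signal below are our own illustrative assumptions, not the actual design used in URBANO; the sketch only shows the general pattern of scoring paragraphs with weighted fuzzy rules and evolving the rule weights toward audience feedback.

    ```python
    import random

    def triangular(x, a, b, c):
        """Triangular fuzzy membership on [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def quality(paragraph, weights):
        """Weighted fuzzy quality index over two illustrative rules:
        'good length' and 'high relevance' (hypothetical features)."""
        mu_len = triangular(paragraph["length"], 20, 60, 120)     # words
        mu_rel = triangular(paragraph["relevance"], 0.0, 1.0, 1.2)
        return weights[0] * mu_len + weights[1] * mu_rel

    def evolve_weights(paragraphs, audience_score, pop_size=30, gens=50):
        """Genetic algorithm over rule weights; fitness rewards agreement
        between the fuzzy score and the audience's opinion per paragraph."""
        pop = [[random.random(), random.random()] for _ in range(pop_size)]
        def fitness(w):
            return -sum((quality(p, w) - audience_score(p)) ** 2
                        for p in paragraphs)
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = random.sample(parents, 2)
                children.append([(x + y) / 2 + random.gauss(0, 0.05)  # crossover
                                 for x, y in zip(a, b)])              # + mutation
            pop = parents + children
        return max(pop, key=fitness)
    ```

    Under the evolved weights, candidate paragraphs can then be ranked by `quality` and the top-scoring group that satisfies the presentation's restrictions selected.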

    FORTEST: Formal methods and testing

    Formal methods have traditionally been used for the specification and development of software. However, there are potential benefits for the testing stage as well. The panel session associated with this paper explores the usefulness, or otherwise, of formal methods for improving software testing in various contexts. A number of possible uses of formal methods are explored and questions raised. The contributors are all members of the UK FORTEST Network on formal methods and testing. Although the authors generally believe that formal methods are useful in aiding the testing process, this paper is intended to provoke discussion; dissenters are encouraged to put their views to the panel or individually to the authors.
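    One concrete route from formal methods to testing is specification-based (property-based) testing, where a formal postcondition serves directly as a test oracle. The following minimal sketch, using the Python hypothesis library on a sorting function, is our own illustration of the idea rather than an example from the FORTEST Network's work.

    ```python
    from hypothesis import given, strategies as st

    def insertion_sort(xs):
        """Function under test."""
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    # Formal postcondition of sorting, used directly as the oracle:
    # the output is ordered and is a permutation of the input.
    @given(st.lists(st.integers()))
    def test_sort_spec(xs):
        ys = insertion_sort(xs)
        assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
        assert ys == sorted(xs)  # permutation check via a canonical form

    if __name__ == "__main__":
        test_sort_spec()  # hypothesis generates and runs many cases
    ```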

    Soft computing for intelligent data analysis

    Intelligent data analysis (IDA) is an interdisciplinary field concerned with the effective analysis of data. This paper briefly reviews some key issues in intelligent data analysis, discusses the opportunities for soft computing in this context, and presents several IDA case studies in which soft computing has played a key role. These studies all concern complex real-world problem solving, including checking the consistency of mass spectral data with proposed chemical structures, screening for glaucoma and other eye diseases, forecasting visual field deterioration, and diagnosis in an oil refinery involving multivariate time series. Bayesian networks, evolutionary computation, neural networks, and machine learning in general are among the soft computing techniques used effectively in these studies.
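    As one hedged illustration of the kind of soft computing these case studies apply, the sketch below trains a small neural network to forecast the next value of a time series from a sliding window of past values, loosely in the spirit of the visual-field forecasting study. The data are synthetic and the model choice is ours; the paper's actual models and data differ.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0, 1, 500))   # synthetic time series

    # Sliding-window features: predict series[t] from the previous 8 values.
    window = 8
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]

    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    print("held-out MSE:", np.mean((model.predict(X[split:]) - y[split:]) ** 2))
    ```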

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine, the use of big data and sophisticated machine learning techniques for health-care applications, could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate but related problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to that health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    Improved sampling of the pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto converging genetic algorithm

    Previous work on multiobjective genetic algorithms has focused on preventing genetic drift, and the issue of convergence has received little attention. In this paper, we present a simple steady-state strategy, the Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which, by definition, is unknown in most real search problems. We argue that there is always a certain inheritance of genetic material within a population and that there is unlikely to be any significant gain beyond some point, which suggests a stopping criterion for terminating the computation. To further encourage diversity and competition, a non-migrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which have a tendency to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce diverse sampling of the Pareto-front without niching and with significantly less computational effort.
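    Two ingredients of the abstract translate naturally into a short sketch: ranking a population into successive Pareto fronts, and a histogram-of-ranks check for convergence. The rank definition and the convergence test below are simplified assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def dominates(a, b):
        """a dominates b (minimization): no worse everywhere, better somewhere."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_ranks(objs):
        """Rank 1 = nondominated front; peel off successive fronts."""
        objs = np.asarray(objs)
        ranks = np.zeros(len(objs), dtype=int)
        remaining = set(range(len(objs)))
        r = 1
        while remaining:
            front = {i for i in remaining
                     if not any(dominates(objs[j], objs[i])
                                for j in remaining if j != i)}
            for i in front:
                ranks[i] = r
            remaining -= front
            r += 1
        return ranks

    def rank_histogram_converged(prev_ranks, curr_ranks, tol=0.01):
        """Simplified convergence check: stop when the distribution of
        ranks no longer changes between generations."""
        hi = int(max(prev_ranks.max(), curr_ranks.max()))
        edges = np.arange(1, hi + 2)
        h_prev, _ = np.histogram(prev_ranks, bins=edges, density=True)
        h_curr, _ = np.histogram(curr_ranks, bins=edges, density=True)
        return np.abs(h_prev - h_curr).sum() < tol
    ```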

    Fairness Testing: Testing Software for Discrimination

    This paper defines software fairness and discrimination and develops a testing-based method for measuring if and how much software discriminates, focusing on causality in discriminatory behavior. Evidence of software discrimination has been found in modern software systems that recommend criminal sentences, grant access to financial products, and determine who is allowed to participate in promotions. Our approach, Themis, generates efficient test suites to measure discrimination. Given a schema describing valid system inputs, Themis generates discrimination tests automatically and does not require an oracle. We evaluate Themis on 20 software systems, 12 of which come from prior work with explicit focus on avoiding discrimination. We find that (1) Themis is effective at discovering software discrimination, (2) state-of-the-art techniques for removing discrimination from algorithms fail in many situations, at times discriminating against as much as 98% of an input subdomain, (3) Themis optimizations are effective at producing efficient test suites for measuring discrimination, and (4) Themis is more efficient on systems that exhibit more discrimination. We thus demonstrate that fairness testing is a critical aspect of the software development cycle in domains with possible discrimination and provide initial tools for measuring software discrimination.

    Comment: Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. 2017. Fairness Testing: Testing Software for Discrimination. In Proceedings of the 2017 11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE'17), Paderborn, Germany, September 4-8, 2017. https://doi.org/10.1145/3106237.3106277
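    The core measurement the abstract describes, causal discrimination, is directly implementable: fix all attributes of an input, change only the protected one, and count how often the decision flips. The schema format and the toy system below are our own illustrative assumptions; Themis itself adds optimizations such as test-suite pruning that this sketch omits.

    ```python
    import random

    def causal_discrimination(system, schema, protected, samples=1000, seed=0):
        """Estimate the fraction of random inputs whose decision changes
        when only the protected attribute is altered."""
        rng = random.Random(seed)
        flipped = 0
        for _ in range(samples):
            x = {attr: rng.choice(vals) for attr, vals in schema.items()}
            base = system(x)
            for v in schema[protected]:
                if v != x[protected] and system(dict(x, **{protected: v})) != base:
                    flipped += 1
                    break
        return flipped / samples

    # Hypothetical loan-approval system and input schema for illustration.
    schema = {"age": list(range(18, 90)),
              "income": list(range(0, 200)),
              "gender": ["F", "M"]}
    loan = lambda x: x["income"] > 50 and not (x["gender"] == "F" and x["age"] < 30)
    print(causal_discrimination(loan, schema, "gender"))  # nonzero: discriminates
    ```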

    Automated Discovery in Econometrics

    Our subject is the notion of automated discovery in econometrics. Advances in computer power, electronic communication, and data collection processes have all changed the way econometrics is conducted. These advances have helped to elevate the status of empirical research within the economics profession in recent years, and they now open up new possibilities for empirical econometric practice. Of particular significance is the ability to build econometric models in an automated way according to an algorithm of decision rules that allow for (what we call here) heteroskedasticity and autocorrelation robust (HAR) inference. Computerized search algorithms may be implemented to seek out suitable models, thousands of regressions and model evaluations may be performed in seconds, statistical inference may be automated according to the properties of the data, and policy decisions can be made and adjusted in real time with the arrival of new data. We discuss some aspects and implications of these exciting, emergent trends in econometrics.

    Keywords: automation, discovery, HAC estimation, HAR inference, model building, online econometrics, policy analysis, prediction, trends
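    The automated model building the authors describe can be sketched as a search over candidate regressor subsets, scored by an information criterion, with HAC (Newey-West) robust standard errors reported for the selected specification. The selection rule, lag length, and synthetic data below are illustrative assumptions, not a procedure from the paper.

    ```python
    import itertools
    import numpy as np
    import statsmodels.api as sm

    # Synthetic data: y depends on x1 and x3 only.
    rng = np.random.default_rng(0)
    n = 300
    X_all = rng.normal(size=(n, 4))
    y = 1.0 + 2.0 * X_all[:, 0] - 1.5 * X_all[:, 2] + rng.normal(size=n)
    names = ["x1", "x2", "x3", "x4"]

    # Enumerate all regressor subsets and keep the best by BIC.
    best = None
    for k in range(1, len(names) + 1):
        for subset in itertools.combinations(range(len(names)), k):
            fit = sm.OLS(y, sm.add_constant(X_all[:, subset])).fit()
            if best is None or fit.bic < best[0]:
                best = (fit.bic, subset)

    _, subset = best
    # Report HAC (Newey-West) robust inference for the selected model.
    hac_fit = sm.OLS(y, sm.add_constant(X_all[:, subset])).fit(
        cov_type="HAC", cov_kwds={"maxlags": 4})
    print("selected regressors:", [names[i] for i in subset])
    print(hac_fit.summary())
    ```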