
    Improving fairness in machine learning systems: What do industry practitioners need?

    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs. Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).

    Reinforcement learning for efficient network penetration testing

    Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are increasingly becoming non-standard, composite and resource-consuming despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system which makes use of machine learning techniques, namely reinforcement learning (RL), to learn and reproduce average and complex pentesting activities. The proposed system, named the Intelligent Automated Penetration Testing System (IAPTS), consists of a module that integrates with industrial PT frameworks to enable them to capture information, learn from experience, and reproduce tests in future similar testing cases. IAPTS aims to save human resources while producing much-enhanced results in terms of time consumption, reliability and frequency of testing. IAPTS takes the approach of modeling PT environments and tasks as a partially observable Markov decision process (POMDP), which is solved with a POMDP solver. Although the scope of this paper is limited to PT planning for network infrastructures and not the entire practice, the obtained results support the hypothesis that RL can enhance PT beyond the capabilities of any human PT expert in terms of time consumed, attack vectors covered, and the accuracy and reliability of the outputs. In addition, this work tackles the complex problem of expertise capture and re-use by allowing the IAPTS learning module to store and re-use PT policies in the same way that a human PT expert would, but more efficiently.
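    The abstract above describes modelling pentesting as a POMDP solved with RL. As a rough illustration of the underlying idea (and not of IAPTS itself), the sketch below treats a two-host toy network as a fully observed MDP and learns an attack sequence with tabular Q-learning; the environment, actions and rewards (PentestEnv, scan/exploit costs) are invented for the example.

```python
# Minimal sketch, not the IAPTS implementation: a toy pentesting environment
# treated as a fully observed MDP and solved with tabular Q-learning. The paper
# models the real problem as a POMDP; this only illustrates the
# "learn which exploit to try next" idea. All names and rewards are hypothetical.
import random
from collections import defaultdict

class PentestEnv:
    """Toy network: compromise host 0, then pivot to host 1."""
    ACTIONS = ["scan_h0", "exploit_h0", "scan_h1", "exploit_h1"]

    def reset(self):
        self.state = (0, 0)  # (hosts scanned bitmask, hosts owned bitmask)
        return self.state

    def step(self, action):
        scanned, owned = self.state
        reward = -1  # every action costs time
        if action == "scan_h0":
            scanned |= 1
        elif action == "exploit_h0" and scanned & 1:
            owned |= 1
            reward = 10
        elif action == "scan_h1" and owned & 1:   # pivoting requires owning h0
            scanned |= 2
        elif action == "exploit_h1" and scanned & 2:
            owned |= 2
            reward = 100
        self.state = (scanned, owned)
        return self.state, reward, owned == 3     # done when both hosts owned

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        for _ in range(200):                      # cap episode length
            if random.random() < eps:
                action = random.choice(env.ACTIONS)
            else:
                action = max(env.ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = env.step(action)
            best_next = max(q[(nxt, a)] for a in env.ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning(PentestEnv())
    # learned first step from the initial state
    print(max(PentestEnv.ACTIONS, key=lambda a: q[((0, 0), a)]))
```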

    A conceptual framework for changes in Fund Management and Accountability relative to ESG issues

    Major developments in socially responsible investment (SRI) and in environmental, social and governance (ESG) issues for fund managers (FMs) have occurred in the past decade. Much positive change has occurred but problems of disclosure, transparency and accountability remain. This article argues that trustees, FM investors and investee companies all require shared knowledge to overcome, in part, these problems. This involves clear concepts of accountability, and knowledge of fund management and of the associated ‘chain of accountability’ to enhance visibility and transparency. Dealing with the problems also requires development of an analytic framework based on relevant literature and theory. These empirical and analytic constructs combine to form a novel conceptual framework that is used to identify a clear set of areas to change FM investment decision making in a coherent way relative to ESG issues. The constructs and the change strategy are also used together to analyse how one can create favourable conditions for enhanced accountability. Ethical problems and climate change issues will be used as the main examples of ESG issues. The article has policy implications for the UK ‘Stewardship Code’ (2010), the legal responsibilities of key players and for the ‘Carbon Disclosure Project’.

    Integration via Meaning: Using the Semantic Web to deliver Web Services

    Presented at the CRIS2002 Conference in Kassel; 9 pages; contains conference paper (PDF) and PPT presentation. The major developments of the World Wide Web (WWW) in the last two years have been Web Services and the Semantic Web. The former allows the construction of distributed systems across the WWW by providing a lightweight middleware architecture. The latter provides an infrastructure for accessing resources on the WWW via their relationships with respect to conceptual descriptions. In this paper, I shall review the progress undertaken in each of these two areas. Further, I shall argue that in order for the aims of both the Semantic Web and the Web Services activities to be successful, the Web Service architecture needs to be augmented with concepts and tools from the Semantic Web. This infrastructure will allow resource discovery, brokering and access to be enabled in a standardised, integrated and interoperable manner. Finally, I survey the CLRC Information Technology R&D programme to show how it is contributing to the development of this future infrastructure.
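    As a loose illustration of the argument above, augmenting Web Services with Semantic Web descriptions so that services can be discovered by the concepts they provide, the sketch below annotates two hypothetical services with RDF triples and locates a match with a SPARQL query. It assumes the third-party rdflib package; the ex: vocabulary and service names are invented and are not part of the CLRC programme described in the paper.

```python
# Illustrative sketch only: semantic service discovery via RDF + SPARQL.
# Requires rdflib (pip install rdflib). The ex: vocabulary is made up.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/services#")
g = Graph()

# Describe two services by the domain concept they expose, not by their API.
g.add((EX.TidePredictor, RDF.type, EX.Service))
g.add((EX.TidePredictor, EX.providesConcept, EX.SeaLevel))
g.add((EX.TidePredictor, EX.endpoint, Literal("http://example.org/tide/wsdl")))

g.add((EX.AirQualityFeed, RDF.type, EX.Service))
g.add((EX.AirQualityFeed, EX.providesConcept, EX.AirPollution))

# Broker step: find any service that provides the concept the client needs.
query = """
PREFIX ex: <http://example.org/services#>
SELECT ?svc ?endpoint WHERE {
    ?svc a ex:Service ;
         ex:providesConcept ex:SeaLevel .
    OPTIONAL { ?svc ex:endpoint ?endpoint }
}
"""
for svc, endpoint in g.query(query):
    print(f"matched service: {svc} at {endpoint}")
```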

    RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction

    Robots have the potential to revolutionize the way we interact with the world around us. One of the areas with the greatest potential is mobile health, where robots can be used to facilitate clinical interventions. However, to accomplish this, robots need access to our private data in order to learn from these data and improve their interaction capabilities. Furthermore, to enhance this learning process, knowledge sharing among multiple robot units is the natural next step. However, to date, there is no well-established framework that allows for such data sharing while preserving the privacy of the users (e.g., hospital patients). To this end, we introduce RoboChain, the first learning framework for secure, decentralized and computationally efficient data and model sharing among multiple robot units installed at multiple sites (e.g., hospitals). RoboChain builds upon and combines the latest advances in open data access and blockchain technologies, as well as machine learning. We illustrate this framework using the example of a clinical intervention conducted in a private network of hospitals. Specifically, we lay down the system architecture that allows multiple robot units, conducting the interventions at different hospitals, to perform efficient learning without compromising data privacy. Comment: 7 pages, 6 figures.
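    To make the sharing idea concrete, here is a hedged sketch, not the actual RoboChain protocol: each hospital site contributes only a model update, a hash-chained ledger records a digest of each contribution for auditability, and the updates are averaged into a shared model. The classes, field names and the naive averaging step are simplifying assumptions.

```python
# Hedged sketch of "share model updates, not raw data, and audit them on a
# chain-like ledger". Everything here is a simplified illustration.
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class UpdateLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "payload": "genesis", "ts": 0}]

    def record(self, site_id, model_update):
        # Only a fingerprint of the update goes on-chain; the update itself
        # travels over the private hospital network.
        payload = {"site": site_id,
                   "update_digest": hashlib.sha256(
                       json.dumps(model_update).encode()).hexdigest()}
        block = {"index": len(self.chain),
                 "prev": block_hash(self.chain[-1]),
                 "payload": payload,
                 "ts": time.time()}
        self.chain.append(block)
        return block

def aggregate(updates):
    """Naive federated-style averaging of weight vectors."""
    n = len(updates)
    return [sum(w[i] for w in updates) / n for i in range(len(updates[0]))]

ledger = UpdateLedger()
local_updates = {"hospital_A": [0.1, 0.4], "hospital_B": [0.3, 0.2]}
for site, update in local_updates.items():
    ledger.record(site, update)
print(aggregate(list(local_updates.values())))  # shared global model: [0.2, 0.3]
```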

    The Need to Adapt to New Financial Accounting Technologies Information in the Context of Global Economic Crisis

    Today, accounting is a necessity, not a desire. Concerns for the improvement of accounting practices are necessary, especially in Romania, where these activities are strengthened as the Romanian economy progresses towards integration into the structures of the European Union. This paper presents an objective analysis of how financial and accounting information is currently reported on the web, discusses the disadvantages of this reporting approach, and also the potential benefits that could be created by the rapid adoption of international standards for web-based financial reporting. At the same time, the paper argues for the adoption, as soon as possible, of intelligent technologies coupled with a web-based language for reporting financial information. Keywords: economic crisis, web reporting, intelligent financial-accounting systems

    Self-tuning diagnosis of routine alarms in rotating plant items

    Condition monitoring of rotating plant items in the energy generation industry is often achieved through examination of vibration signals. Engineers use this data to monitor the operation of turbine generators, gas circulators and other key plant assets. A common approach in such monitoring is to trigger an alarm when a vibration deviates from a predefined envelope of normal operation. This limit-based approach, however, generates a large volume of alarms that are not indicative of system damage or concern, triggered for example by operational transients that result in temporary increases in vibration. In the nuclear generation context, all alarms on rotating plant assets must be analysed and subjected to auditable review. The analysis of these alarms is often undertaken manually, on a case-by-case basis, but recent developments in monitoring research have brought forward the use of intelligent systems techniques to automate parts of this process. A knowledge-based system (KBS) has been developed to automatically analyse routine alarms, where the underlying cause can be attributed to observable operational changes. The initialisation and ongoing calibration of such systems, however, is a problem, as the normal machine state is not uniform throughout asset life due to maintenance procedures and the wear of components. In addition, different machines will exhibit differing vibro-acoustic dynamics. This paper proposes a self-tuning knowledge-driven analysis system for routine alarm diagnosis across the key rotating plant items within the nuclear context common to the UK. Such a system has the ability to automatically infer the causes of routine alarms and provide auditable reports to the engineering staff.
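    As a minimal sketch of the self-tuning idea (not the KBS reported in the paper), the code below re-estimates a normal-operation envelope from recent healthy vibration samples and labels an alarm as routine when it coincides with an observable operational change such as a speed transient. The window size, the three-sigma limit and the single rule are illustrative assumptions.

```python
# Minimal self-tuning alarm sketch: the "normal" envelope is re-learned from
# recent healthy data; alarms coinciding with an operational change are
# labelled routine. Thresholds and the rule are illustrative only.
from collections import deque
from statistics import mean, stdev

class SelfTuningAlarm:
    def __init__(self, window=200, k=3.0):
        self.history = deque(maxlen=window)  # recent "healthy" samples
        self.k = k

    def update(self, vibration_mm_s, speed_change_pct):
        alarm, cause = False, None
        if len(self.history) >= 30:
            limit = mean(self.history) + self.k * stdev(self.history)
            if vibration_mm_s > limit:
                alarm = True
                # knowledge-driven step: attribute the alarm to an observable
                # operational change where one exists
                cause = ("routine: speed transient"
                         if abs(speed_change_pct) > 5 else "unexplained")
        if not alarm:
            # only re-tune the envelope on data judged to be normal
            self.history.append(vibration_mm_s)
        return alarm, cause

monitor = SelfTuningAlarm()
for v, ds in [(1.0, 0), (1.1, 0), (0.9, 0)] * 20 + [(2.5, 8), (2.6, 0)]:
    alarm, cause = monitor.update(v, ds)
    if alarm:
        print(f"alarm at {v} mm/s -> {cause}")
```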

    From Sensor to Observation Web with Environmental Enablers in the Future Internet

    This paper outlines the grand challenges in global sustainability research and the objectives of the FP7 Future Internet PPP program within the Digital Agenda for Europe. Large user communities are generating significant amounts of valuable environmental observations at local and regional scales using the devices and services of the Future Internet. These communities’ environmental observations represent a wealth of information which is currently hardly used, or used only in isolation, and is therefore in need of integration with other information sources. Indeed, this very integration will lead to a paradigm shift from a mere Sensor Web to an Observation Web with semantically enriched content emanating from sensors, environmental simulations and citizens. The paper also describes the research challenges to realize the Observation Web and the associated environmental enablers for the Future Internet. Such an environmental enabler could, for instance, be an electronic sensing device, a web-service application, or even a social networking group affording or facilitating the capability of Future Internet applications to consume, produce, and use environmental observations in cross-domain applications. The term ‘envirofied’ Future Internet is coined to describe this overall target, which forms a cornerstone of work in the Environmental Usage Area within the Future Internet PPP program. Relevant trends described in the paper are the use of ubiquitous sensors (anywhere), the provision and generation of information by citizens, and the convergence of real and virtual realities to convey understanding of environmental observations. The paper addresses the technical challenges in the Environmental Usage Area and the need to design a multi-style service-oriented architecture. Key topics are the mapping of requirements to capabilities, providing scalability and robustness, and implementing context-aware information retrieval. Another essential research topic is the handling of data fusion and model-based computation, and the related propagation of information uncertainty. Approaches to security, standardization and harmonization, all essential for sustainable solutions, are summarized from the perspective of the Environmental Usage Area. The paper concludes with an overview of emerging, high-impact applications in the environmental areas concerning land ecosystems (biodiversity), air quality (atmospheric conditions) and water ecosystems (marine asset management).
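    One of the research topics listed above, data fusion with propagation of information uncertainty, can be illustrated with a standard inverse-variance weighting of two observations of the same quantity; the sources and numbers below are invented for the example.

```python
# Hedged illustration of data fusion with uncertainty propagation:
# two independent observations of the same quantity (e.g. an air-quality
# reading from a citizen sensor and from a simulation) combined by
# inverse-variance weighting.
def fuse(observations):
    """observations: list of (value, variance); returns fused (value, variance)."""
    weights = [1.0 / var for _, var in observations]
    fused_var = 1.0 / sum(weights)
    fused_val = fused_var * sum(w * v for w, (v, _) in zip(weights, observations))
    return fused_val, fused_var

sensor = (42.0, 16.0)   # citizen sensor: noisier
model = (38.0, 4.0)     # simulation output: tighter uncertainty
value, variance = fuse([sensor, model])
print(f"fused estimate: {value:.1f} +/- {variance ** 0.5:.1f}")
```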

    Discussant's response to Expert systems and AI-based decision support in auditing: Progress and perspectives
