    Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.

    Computerisation and decision making in neonatal intensive care: a cognitive engineering investigation

    This paper reports results from a cognitive engineering study that examined the role of computerised monitoring in neonatal intensive care. A range of methodologies was used: interviews with neonatal staff, ward observations, and experimental techniques. The purpose was to investigate the sources of information used by clinicians when making decisions in the neonatal ICU. It was found that, although computerised monitoring was welcomed by staff, it played a secondary role in clinicians' decision making (especially for junior and nursing staff) and that staff used the computer less often than their self-reports indicated. Factors that seemed to affect staff use of the computer were the lack (or shortage) of training on the system, the specific clinical conditions involved, and the availability of alternative sources of information. These findings have important implications for the design of computerised decision support in intensive care and suggest ways in which computerised monitoring can be enhanced: through systematic staff training, by making certain types of clinical information available online, by adapting the user interface, and by developing intelligent algorithms.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, 'programmed' AI, and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and an inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation, and adaptation are more readily facilitated.
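
    As a minimal sketch of the hybridisation the report favours, the following example pairs a rule-based signature check for known misuses with a simple statistical profile of normal behaviour for unknown ones. The event schema, destinations, and threshold are hypothetical illustrations, not drawn from the report.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical event record; a real system would draw these from
# call-detail records or network logs.
@dataclass
class Event:
    user: str
    call_duration: float  # minutes
    destination: str

# Signature component: encodes *known* misuses as rules.
# Misses anything it was not programmed for (false negatives).
def rule_based_alert(event: Event) -> bool:
    KNOWN_BAD_DESTINATIONS = {"premium-900", "fraud-hub"}  # illustrative
    return event.destination in KNOWN_BAD_DESTINATIONS

# Anomaly component: learns a profile of *normal* behaviour and
# flags deviations (prone to false positives).
class AnomalyDetector:
    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.mu = 0.0
        self.sigma = 1.0

    def fit(self, durations: list[float]) -> None:
        self.mu = mean(durations)
        self.sigma = stdev(durations) or 1.0  # guard against zero variance

    def is_anomalous(self, event: Event) -> bool:
        z = abs(event.call_duration - self.mu) / self.sigma
        return z > self.threshold

# Hybrid correlation: either component can raise an alert; a
# signature hit is treated as the higher-confidence verdict.
def classify(event: Event, anomaly: AnomalyDetector) -> str:
    if rule_based_alert(event):
        return "known-misuse"
    if anomaly.is_anomalous(event):
        return "possible-unknown-misuse"
    return "normal"

detector = AnomalyDetector()
detector.fit([3.2, 4.1, 2.8, 5.0, 3.7])  # normal call durations
print(classify(Event("alice", 42.0, "intl-01"), detector))   # possible-unknown-misuse
print(classify(Event("bob", 3.0, "premium-900"), detector))  # known-misuse
```

    In a fuller hybrid system, as the report notes, the two components would also update each other: confirmed anomalies become new signatures, and signature hits refine the normal-behaviour profile.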

    Data management and Data Pipelines: An empirical investigation in the embedded systems domain

    Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, and the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields and is thus evolving into a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques. Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, propose an improved data management approach, and empirically validate the proposed approach. Method: To achieve this goal, we conducted the research in close collaboration with Software Center companies, using a combination of empirical research methods: case studies, literature reviews, and action research. Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to deep learning models developed at embedded system companies. Second, it examines practices such as DataOps and data pipelines that help to address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life-cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model was realized as a small-scale data pipeline implementation, which was used to calculate the percentage of data dumps saved. Future work: As future work, we plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines. We also plan to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
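
    To make the idea of per-step faults and mitigation strategies concrete, here is a small sketch of one pipeline stage that validates records on ingestion and quarantines faulty ones rather than dumping the whole batch. The record schema, validation rules, and the "salvaged" metric are hypothetical stand-ins for the thesis's measure of saved data dumps.

```python
# Hypothetical sensor record; field names are illustrative only.
Record = dict

def validate(record: Record) -> bool:
    """Fault check at the ingestion step: reject records with missing
    or out-of-range values instead of failing the whole batch."""
    return (
        record.get("sensor_id") is not None
        and isinstance(record.get("temperature"), (int, float))
        and -40.0 <= record["temperature"] <= 125.0
    )

def pipeline_step(records, transform, quarantine):
    """One pipeline stage. Mitigation here is per-record quarantine,
    so a single faulty record no longer forces a full data dump."""
    passed = []
    for rec in records:
        if validate(rec):
            passed.append(transform(rec))
        else:
            quarantine.append(rec)  # kept for later inspection or repair
    return passed

batch = [
    {"sensor_id": "s1", "temperature": 21.5},
    {"sensor_id": None, "temperature": 19.0},   # fault: missing id
    {"sensor_id": "s2", "temperature": 999.0},  # fault: out of range
    {"sensor_id": "s3", "temperature": 23.1},
]
quarantine: list[Record] = []
clean = pipeline_step(
    batch,
    lambda r: {**r, "temp_f": r["temperature"] * 9 / 5 + 32},
    quarantine,
)

# Per-batch analogue of the thesis's saved-data-dumps percentage:
# records salvaged versus total records received.
saved_pct = 100 * len(clean) / len(batch)
print(f"salvaged {saved_pct:.0f}% of the batch; {len(quarantine)} records quarantined")
```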

    Shore-based Voyage Planning

    The objective of the thesis was to describe the voyage planning process and the factors that influence it, in order to see how the process could be adapted to be performed shoreside. The thesis is a qualitative study written from the voyage planning officer’s point of view, concentrating on the appraisal and planning stages. The regulatory framework was defined using IMO and British Admiralty publications. Carnival Corporation’s SMS policies and Holland America Line’s voyage planning routines were used as examples of the process. As there is not much research available on voyage planning and newly developing technologies, interviews and internet sources were used. The amount of work put into a voyage plan varies greatly depending on the ship type and trade area, but it is generally a time-consuming process, partly because the information needs to be gathered from multiple sources and is not always easily available. The concept of e-navigation aims to improve connectivity between different systems and stakeholders, allowing new types of services and information dissemination across the industry and enabling navigators to receive relevant information in time, often automatically and without needing to request it separately. Automated ship-to-ship information exchange will also become possible. AI-aided planning software and government-provided passage plans can assist the voyage planning officer’s work, but their scope is still quite limited. In the future, as the technology develops, and especially if all information can be accessed from a single window, the time spent on the appraisal and planning stages will decrease considerably, and most of the process could be done shoreside, leaving the officers on board more time for other tasks. Autonomous vessels and augmented reality are the future, and as the technology develops, shore-based voyage planning will become more common.