2,456 research outputs found

    Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors. Data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach is associated with the accuracy of the model and the information that is used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they afford the ability to analyze data acquired from sensors and to provide a real-time solution for decision making; however, these approaches require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. This review also includes information on the types of sensors used as well as on the development of data-driven algorithms for damage identification.
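
    As a hedged illustration of the data-driven approach summarized above (not a method taken from the reviewed paper), the following sketch shows one common pattern for damage detection: fit a baseline on sensor data from the healthy structure using principal component analysis, then flag later measurements whose reconstruction error exceeds a threshold. The function names, component count, and threshold rule are illustrative assumptions.

        # Minimal sketch of data-driven damage detection via PCA novelty detection.
        # Assumption: rows of X_healthy are sensor snapshots (e.g., vibration features);
        # this illustrates the general pattern, not the reviewed paper's method.
        import numpy as np

        def fit_baseline(X_healthy, n_components=3):
            """Fit a PCA baseline on healthy-state sensor data."""
            mean = X_healthy.mean(axis=0)
            Xc = X_healthy - mean
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
            components = Vt[:n_components]
            # Reconstruction errors on healthy data define a simple 3-sigma threshold
            errors = np.linalg.norm(Xc - (Xc @ components.T) @ components, axis=1)
            return mean, components, errors.mean() + 3 * errors.std()

        def detect_damage(x_new, mean, components, threshold):
            """Return True if a new measurement deviates from the healthy baseline."""
            xc = x_new - mean
            residual = xc - (xc @ components.T) @ components
            return np.linalg.norm(residual) > threshold

    In a streaming setting, each incoming sensor frame would be passed through detect_damage to support the real-time decision making mentioned in the abstract; localization, classification, and prognosis would require further steps beyond this sketch.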

    The Maritime Domain Awareness Center – A Human-Centered Design Approach

    This paper contends that Maritime Domain Awareness Center (MDAC) design should be a holistic approach integrating established knowledge about human factors, decision making, cognitive tasks, complexity science, and human information interaction. The design effort should not be primarily a technology effort that focuses on computer screens, information feeds, display technologies, or user interfaces. The existence of a room with access to vast amounts of information and wall-to-wall video screens of ships, aircraft, weather data, and other regional information does not necessarily equate to possessing situation awareness. Fundamental principles of human-centered information design should guide MDAC design and technology selection, and it is imperative that they be addressed early in system development. The design approach should address the reason and purpose for a given MDAC. Subsequent design efforts should address ergonomic interaction with information – the relationship of the brain to the information ecosystem provided by the MDAC, and the cognitive science of situation awareness and decision making. This understanding will guide technology functionality. The system user and decision maker should be the focus of the information design specifications, and this user population must participate in and influence the information design. Accordingly, this paper provides a “design gestalt” by which to approach the design and development of an MDAC.

    Applications of Artificial Intelligence in Military Training Simulation

    This report is a survey of Artificial Intelligence (AI) technology contributions to military training. It provides an overview of military training simulation and a review of instructional problems and challenges that can be addressed by AI. The survey includes current as well as potential applications of AI, with particular emphasis on design and system integration issues. Applications include knowledge and skills training in strategic planning and decision making, tactical warfare operations, electronics maintenance and repair, as well as computer-aided design of training systems. The report describes research contributions in the application of AI technology to the training world, and it concludes with an assessment of future research directions in this area.

    BeneWinD: An Adaptive Benefit Win–Win Platform with Distributed Virtual Emotion Foundation

    In recent decades, online platforms that use Web 3.0 have tremendously expanded their goods, services, and values to numerous applications thanks to their inherent advantages of convenience, service speed, connectivity, etc. Although online commerce and other relevant platforms have clear merits, offline-based commerce and payments remain indispensable and should be continuously supported, because offline systems have intrinsic value for people. With the theme of benefiting all humankind, we propose a new adaptive benefit platform, called BeneWinD, which is endowed with the strengths of both online and offline platforms. Furthermore, a new currency for integrated benefits, the win–win digital currency, is used in the proposed platform. Essentially, the proposed platform with a distributed virtual emotion foundation aims to provide a wide scope of benefits to both parties, the seller and consumer, in online and offline settings. We primarily introduce features, applicable scenarios, and services of the proposed platform. Different from previous systems and perspectives, BeneWinD can be combined with Web 3.0 because it operates on the decentralized or distributed virtual emotion foundation, and the virtual emotion feature and the anonymized detected virtual emotion information are open to everyone who wants to participate in the platform. It follows that the BeneWinD platform can be connected to the linked virtual emotion data block or win–win digital currency. Finally, crucial research challenges and issues are addressed in order to guide further development of the platform.
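
    The abstract describes anonymized virtual emotion information being openly linked to data blocks and to the win–win digital currency, but it does not specify a concrete data layout. Purely as a speculative sketch under that reading, the snippet below shows one way hash-linked blocks of anonymized emotion records could be represented; every field name and the chaining scheme are assumptions, not details from the paper.

        # Speculative sketch only: hash-linked blocks of anonymized "virtual emotion"
        # records. Field names and the chaining scheme are illustrative assumptions,
        # not the BeneWinD design.
        import hashlib
        import json
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class EmotionRecord:
            anonymous_id: str    # pseudonymous participant identifier, no personal data
            emotion_label: str   # e.g., "positive", "neutral", "negative"
            context: str         # e.g., "offline_purchase", "online_review"

        @dataclass
        class EmotionBlock:
            prev_hash: str                                  # link to the previous block
            records: List[EmotionRecord] = field(default_factory=list)

            def block_hash(self) -> str:
                payload = json.dumps(
                    {"prev": self.prev_hash,
                     "records": [vars(r) for r in self.records]},
                    sort_keys=True)
                return hashlib.sha256(payload.encode()).hexdigest()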

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state-of-the-art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment are increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to automation of actual procedures including remote and robotic-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and virtual artefact tactile interaction from either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific individual adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels to match individual requirements. Self-adaptive technology has been developed previously within individual technologies of VR training. One conclusion of this research is that an enhanced portable framework does not yet exist; it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.
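
    As a hedged illustration of the trainee-specific adaptation described above (not code from any surveyed system), the sketch below adjusts a difficulty level from recent assessment scores; the target accuracy band, step size, and level range are assumptions.

        # Minimal sketch of data-driven difficulty adaptation for VR training.
        # Assumption: assessment scores are accuracies in [0, 1]; thresholds and the
        # level range are illustrative, not values from the surveyed systems.
        def adapt_difficulty(current_level, recent_scores,
                             target_low=0.6, target_high=0.85,
                             min_level=1, max_level=10):
            """Raise difficulty when the trainee performs above the target band,
            lower it when below, and keep it unchanged otherwise."""
            if not recent_scores:
                return current_level
            mean_score = sum(recent_scores) / len(recent_scores)
            if mean_score > target_high:
                return min(current_level + 1, max_level)
            if mean_score < target_low:
                return max(current_level - 1, min_level)
            return current_level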

    An evaluation of the Microsoft HoloLens for a manufacturing-guided assembly task

    Many studies have confirmed the benefits of using Augmented Reality (AR) work instructions over traditional digital or paper instructions, but few have compared the effects of different AR hardware for complex assembly tasks. For this research, previously published data using Desktop Model Based Instructions (MBI), Tablet MBI, and Tablet AR instructions were compared to new assembly data collected using AR instructions on the Microsoft HoloLens Head Mounted Display (HMD). Participants completed a mock wing assembly task, and measures such as completion time, error count, Net Promoter Score, and qualitative feedback were recorded. The HoloLens condition yielded faster completion times than all other conditions. HoloLens users also had lower error rates than those who used the non-AR conditions. Despite the performance benefits of the HoloLens AR instructions, users of this condition reported lower Net Promoter Scores than users of the Tablet AR instructions. The qualitative data showed that some users thought the HoloLens device was uncomfortable and that the tracking was not always exact. Although the user feedback favored the Tablet AR condition, the HoloLens condition resulted in significantly faster assembly times. As a result, it is recommended to use the HoloLens for complex guided assembly instructions with minor changes, such as allowing the user to toggle the AR instructions on and off at will. The results of this paper can help manufacturing stakeholders better understand the benefits of different AR technologies for manual assembly tasks.
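
    One of the measures above, the Net Promoter Score, is conventionally computed from 0-10 "would you recommend" ratings as the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). The sketch below shows that standard calculation; the example ratings are invented and are not the study's data.

        # Standard Net Promoter Score: %promoters (9-10) minus %detractors (0-6).
        # The example ratings below are invented, not data from the HoloLens study.
        def net_promoter_score(ratings):
            promoters = sum(1 for r in ratings if r >= 9)
            detractors = sum(1 for r in ratings if r <= 6)
            return 100.0 * (promoters - detractors) / len(ratings)

        # Hypothetical usage for one instruction condition:
        # net_promoter_score([9, 10, 7, 6, 9]) -> 40.0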

    Personalized face and gesture analysis using hierarchical neural networks

    The video-based computational analyses of human face and gesture signals encompass a myriad of challenging research problems involving computer vision, machine learning and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream, b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations, c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals, and d) the personalization of machine learning models, which can adapt to subject and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to the problem of segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from their facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group or subject-specific variations in face and gesture signals. We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
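
    As a hedged sketch of the hierarchical personalization idea in this abstract (not the thesis's actual Bayesian neural architecture), the snippet below treats each subject's weights as shared group weights plus a subject-specific offset that is shrunk toward zero, a simple point-estimate analogue of a hierarchical prior. Names and the shrinkage weight are assumptions.

        # Sketch of hierarchical personalization with a linear model: subject weights =
        # shared weights + per-subject offset, with the offset penalized toward zero
        # (a point-estimate stand-in for a hierarchical Bayesian prior). Illustrative only.
        import numpy as np

        def personalized_loss(shared_w, subject_offsets, X_by_subject, y_by_subject,
                              shrinkage=1.0):
            """Squared error over all subjects plus a penalty tying each subject's
            weights to the shared group weights."""
            loss = 0.0
            for s, X in X_by_subject.items():
                w_s = shared_w + subject_offsets[s]        # personalized weights
                residual = X @ w_s - y_by_subject[s]
                loss += float(residual @ residual)
                loss += shrinkage * float(subject_offsets[s] @ subject_offsets[s])
            return loss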

    Technology Applications Team: Applications of aerospace technology

    Highlights of the Research Triangle Institute (RTI) Applications Team activities over the past quarter are presented in Section 1.0. The Team's progress in fulfilling the requirements of the contract is summarized in Section 2.0. In addition to our market-driven approach to applications project development, RTI has placed increased effort on activities to commercialize technologies developed at NASA Centers. These Technology Commercialization efforts are summarized in Section 3.0. New problem statements prepared by the Team in the reporting period are presented in Section 4.0. The Team's transfer activities for ongoing projects with the NASA Centers are presented in Section 5.0. Section 6.0 summarizes the status of four add-on tasks. Travel for the reporting period is described in Section 7.0. The RTI Team staff and consultants and their project responsibilities are listed in Appendix A. The authors gratefully acknowledge the contributions of many individuals to the RTI Technology Applications Team program. The time and effort contributed by managers, engineers, and scientists throughout NASA were essential to program success. Most important to the program has been a productive working relationship with the NASA Field Center Technology Utilization (TU) Offices. The RTI Team continues to strive for improved effectiveness as a resource to these offices. Industry managers, technical staff, medical researchers, and clinicians have been cooperative and open in their participation. The RTI Team looks forward to continuing expansion of its interaction with U.S. industry to facilitate the transfer of aerospace technology to the private sector.

    Multimodal interaction for deliberate practice
