    Survey on Insurance Claim analysis using Natural Language Processing and Machine Learning

    In the insurance industry today, data is the major asset and plays a key role; insurance carriers now have a wealth of information at their disposal. Three major eras can be identified in the industry's more than 700-year history: the manual era from the 15th century to 1960, the systems era from 1960 to 2000, and the current digital era from 2001 onwards. Across all three periods, the core insurance sector has relied on data analytics and on adopting new technologies to improve existing practices while preserving capital, and this has remained its highest corporate objective. AI techniques have been progressively applied to a variety of insurance activities in recent years. In this study, we give a comprehensive assessment of the existing research that incorporates artificial intelligence (AI) methods into all essential insurance tasks. Although several reviews have already been published on applying AI to specific insurance tasks, our work provides a broader synthesis of this research. We examine learning algorithms, big data, blockchain, data mining, and conversational theory, and their applications to insurance policy, claim prediction, risk estimation, and other areas, in order to comprehensively integrate existing work on AI approaches in the insurance sector.
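
    The survey names claim prediction as one of the core insurance tasks addressed with machine learning. As a purely illustrative, hedged sketch (not drawn from any surveyed paper), a minimal claim-prediction pipeline on hypothetical tabular policy data could look like the following; all feature names and data are invented.

    # Minimal, hypothetical claim-prediction sketch; all features and data are synthetic.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 5000
    policies = pd.DataFrame({
        "policyholder_age": rng.integers(18, 80, n),
        "vehicle_age": rng.integers(0, 20, n),
        "annual_premium": rng.normal(900.0, 250.0, n),
        "prior_claims": rng.poisson(0.3, n),
    })
    # Synthetic label: older vehicles and prior claims raise the claim probability.
    logit = -2.0 + 0.08 * policies["vehicle_age"] + 0.9 * policies["prior_claims"]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(policies, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("ROC-AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    Real claim-prediction systems in the surveyed literature differ in data, features, and models; the sketch only illustrates the general shape of such a pipeline.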

    Multimodal analysis of disinformation and misinformation

    The use of disinformation and misinformation campaigns in the media has attracted much attention from academics and policy-makers. Multimodal analysis, the analysis of two or more semiotic systems (language, gestures, images, sounds, among others) in their interrelation and interaction, is essential to understanding dis-/misinformation efforts because most human communication goes beyond words alone. A confluence of disciplines (e.g. computer science, linguistics, political science, communication studies) is developing methods and analytical models of multimodal communication. This literature review brings research strands from these disciplines together, providing a map of the multi- and interdisciplinary landscape for multimodal analysis of dis-/misinformation. It records the substantial growth, starting in the second quarter of 2020 (the start of the COVID-19 pandemic in Western Europe), in the number of studies on multimodal dis-/misinformation coming from the field of computer science, and it examines that category of studies in more detail. Finally, the review identifies gaps in multimodal research on dis-/misinformation and suggests ways to bridge them, including future cross-disciplinary research directions. Our review provides scholars from different disciplines working on dis-/misinformation with a much-needed bird's-eye view of this rapidly emerging research area.

    MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities

    In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset, a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and the inclusion of semantic data provide a resource that further tests the ability of multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single-task systems on the full and geo-representative versions of MLM demonstrates the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding.
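
    The benchmark specifies tasks to retrieve modalities and to locate entities. As a hedged illustration only (the dimensions, losses, and the use of 27 EU countries as location classes are assumptions, not the authors' reference implementation), a multitask model can share a text projection between an in-batch cross-modal retrieval objective and a country-classification head:

    # Hypothetical multitask sketch: a shared text projection feeds a cross-modal
    # retrieval objective and a location (country) classification head.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultitaskModel(nn.Module):
        def __init__(self, text_dim=768, image_dim=2048, shared_dim=256, n_countries=27):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, shared_dim)    # shared text branch
            self.image_proj = nn.Linear(image_dim, shared_dim)  # image branch for retrieval
            self.location_head = nn.Linear(shared_dim, n_countries)

        def forward(self, text_feats, image_feats):
            t = self.text_proj(text_feats)
            i = self.image_proj(image_feats)
            return t, i, self.location_head(t)

    model = MultitaskModel()
    text = torch.randn(8, 768)     # stand-in for precomputed text features
    image = torch.randn(8, 2048)   # stand-in for precomputed image features
    labels = torch.randint(0, 27, (8,))

    t, i, loc_logits = model(text, image)
    # In-batch contrastive retrieval loss: each text should match its paired image.
    sim = F.normalize(t, dim=-1) @ F.normalize(i, dim=-1).T
    retrieval_loss = F.cross_entropy(sim / 0.07, torch.arange(8))
    location_loss = F.cross_entropy(loc_logits, labels)
    loss = retrieval_loss + location_loss  # joint multitask objective

    Which encoders produce the input features, how much the tasks share, and how the losses are weighted are design choices the benchmark leaves to the system builder; the sketch only shows one common arrangement.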

    Exploring Hardware Fault Impacts on Different Real Number Representations of the Structural Resilience of TCUs in GPUs

    The most recent generations of graphics processing units (GPUs) accelerate the convolutional operations required by machine learning applications by resorting to specialized and efficient in-chip accelerators (Tensor Core Units, or TCUs) that operate on matrix-multiplication tiles. Unfortunately, modern cutting-edge semiconductor technologies are increasingly prone to hardware defects, and the trend of heavily stressing TCUs during the execution of safety-critical and high-performance computing (HPC) applications increases the likelihood that TCUs will produce different kinds of failures. In fact, the intrinsic resilience of arithmetic units to hardware faults plays a crucial role in safety-critical applications using GPUs (e.g., in automotive, space, and autonomous robotics). Recently, new arithmetic formats have been proposed, particularly formats suited to neural network execution. However, a reliability characterization of TCUs supporting different arithmetic formats has so far been lacking. In this work, we quantitatively assessed the impact of hardware faults in TCU structures while employing two distinct formats (floating-point and posit) and two different configurations (16 and 32 bits) to represent real numbers. For the experimental evaluation, we resorted to an architectural description of a TCU core (PyOpenTCU) and performed 120 fault-simulation campaigns, injecting around 200,000 faults per campaign and requiring around 32 days of computation. Our results demonstrate that TCUs using the posit format are less affected by faults than those using the floating-point format (by up to three orders of magnitude for 16 bits and up to twenty orders of magnitude for 32 bits). We also identified the most sensitive fault locations (i.e., those that produce the largest errors), thus paving the way for adopting smart hardening solutions.
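
    To make the kind of experiment concrete, the following is an independent toy sketch of a single bit-flip fault injected into one float16 operand of a 4x4 tile multiplication. It is not PyOpenTCU or the paper's campaign, and posit arithmetic is not modelled here.

    # Toy single-bit-flip fault injection into one float16 operand of a 4x4 tile
    # multiply; an independent illustration, not the paper's fault-simulation setup.
    import numpy as np

    def flip_bit_fp16(x, bit):
        """Flip one bit (0..15) of a float16 scalar and return the faulty value."""
        raw = np.float16(x).view(np.uint16)
        return (raw ^ np.uint16(1 << bit)).view(np.float16)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)).astype(np.float16)
    B = rng.standard_normal((4, 4)).astype(np.float16)
    golden = A.astype(np.float32) @ B.astype(np.float32)  # fault-free reference tile

    for bit, label in [(0, "low mantissa bit"), (14, "high exponent bit")]:
        A_faulty = A.copy()
        A_faulty[0, 0] = flip_bit_fp16(A_faulty[0, 0], bit)
        faulty = A_faulty.astype(np.float32) @ B.astype(np.float32)
        # A flipped high exponent bit can drive the faulty product to a huge or
        # non-finite value, while a low mantissa flip barely perturbs the output.
        print(f"{label}: max |error| = {np.max(np.abs(faulty - golden)):.3e}")

    A campaign like the paper's repeats this idea across fault locations inside the TCU datapath (not just input operands) and across number formats, then aggregates the resulting output errors.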

    Internet of Mirrors for Connected Healthcare and Beauty: A Prospective Vision

    With the shift towards smart objects and automated services in many industries, the health and beauty industries are also becoming increasingly involved in AI-driven smart systems. There is a rising market demand for personalised services and a need for unified platforms in many sectors, specifically the cosmetics and healthcare industries. Alongside this rising demand, there are two major gaps in the integration of autonomous systems within these sectors. Firstly, the existing smart systems in the cosmetics industry are limited to single-purpose products, and the employed technologies are not widespread enough to support the growing consumer demand for personalisation. Secondly, despite the rise of smart devices in healthcare, the current state-of-the-art services do not fulfil the accessibility demands and holistic nature of healthcare. To bridge these gaps, we propose integrating autonomous systems with health and beauty services through a unified visual platform coined the Internet-of-Mirrors (IoM): an interconnected system of smart mirrors with sensing and communication capabilities, in which the smart mirror functions as an immersive visual dashboard providing personalised services for health and beauty consultations and routines. We present an overview of the current state-of-the-art technologies that will enable the development of the IoM, together with a practical vision of the system built around innovative scenarios, giving a forward-looking view of assistive technologies. We also discuss the missing capabilities and challenges the development of the IoM would face and outline future research directions that will support the realisation of our proposed framework.
    Comment: 21 pages, 6 figures

    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning for the period from 2016 to 2021. One of the aims of the review is to identify the video characteristics that have been explored by previous work. Based on our analysis, we suggest a taxonomy that organizes the video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learner activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. We also identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data-analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Transcript text, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activity is heavily explored through log files in data-analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings on the impact of video characteristics on learning effectiveness, reporting on the tasks and technologies used to develop tools that support learning, and summarizing trends in design guidelines for producing learning videos.

    A scientometric analysis of deep learning approaches for detecting Fake News

    The unregulated proliferation of counterfeit news creation and dissemination seen in recent years poses a constant threat to democracy. Fake news articles have the power to persuade individuals, leaving them perplexed. To characterize research on detecting fake news with deep learning, this scientometric study examined 569 documents from the Scopus database published between 2012 and mid-2022, looking at general research trends, publication and citation structures, authorship and collaboration patterns, bibliographic coupling, and productivity patterns. Biblioshiny and VOSviewer were used for the analysis. The findings clearly demonstrate a trend toward an increase in publications since 2016, and the dissemination of fake news remains an issue from a global perspective. Thematic analysis of the papers reveals that research topics related to social media for surveillance and monitoring of public attitudes and perceptions, as well as fake news, are crucial but underdeveloped, while studies on deepfake detection, digital contents, digital forensics, and computer vision constitute niche areas. Furthermore, the results show that China and the USA have the strongest international collaboration, even though India produced the most articles. This paper also examines the current state of the art in deep learning techniques for fake news detection, with the goal of providing a potential roadmap for researchers interested in undertaking research in this field.