    A survey on tidal analysis and forecasting methods for Tsunami detection

    Accurate analysis and forecasting of tidal levels are very important tasks for human activities in oceanic and coastal areas. They can be crucial in catastrophic situations, such as the occurrence of tsunamis, in order to provide rapid alerts to the affected population and to save lives. Conventional tidal forecasting methods are based on harmonic analysis, using the least squares method to determine the harmonic parameters. However, precise tidal level prediction with harmonic analysis requires a large number of parameters and long-term measured data. Furthermore, traditional harmonic methods rely on models based on the analysis of astronomical components and can be inadequate when the contribution of non-astronomical components, such as the weather, is significant. Alternative approaches have been developed in the literature to deal with these situations and to provide predictions of the desired accuracy, also with respect to the length of the available tidal record. These methods include standard high-pass or band-pass filtering techniques, although the relatively deterministic character and large amplitude of tidal signals make special techniques, such as artificial neural networks and wavelet transform analysis methods, more effective. This paper is intended to provide the communities of both researchers and practitioners with a broadly applicable, up-to-date coverage of tidal analysis and forecasting methodologies that have proven successful in a variety of circumstances and that hold particular promise for the future. Classical and novel methods are reviewed in a systematic and consistent way, outlining their main concepts and components, their similarities and differences, and their advantages and disadvantages.
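
    As context for the harmonic approach described above, here is a minimal sketch in Python: the tidal level is modeled as a mean level plus a cosine/sine pair per constituent, and the coefficients are obtained by least squares. The constituent choice (M2, S2), the synthetic data, and the function name fit_tidal_harmonics are illustrative assumptions, not taken from the survey.

    ```python
    import numpy as np

    def fit_tidal_harmonics(t, h, omegas):
        """Least-squares fit of tidal level h(t) to a mean level plus
        harmonic constituents: h(t) ~ Z0 + sum_k [A_k cos(w_k t) + B_k sin(w_k t)].
        t: sample times (hours); h: observed levels; omegas: angular speeds (rad/h).
        """
        cols = [np.ones_like(t)]            # Z0: mean level term
        for w in omegas:
            cols.append(np.cos(w * t))      # A_k column
            cols.append(np.sin(w * t))      # B_k column
        X = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(X, h, rcond=None)
        return X, coeffs

    # Hypothetical demo with the two main semidiurnal constituents, M2 and S2.
    omegas = [2 * np.pi / 12.4206, 2 * np.pi / 12.0]   # rad/hour
    t = np.arange(0.0, 24.0 * 30, 1.0)                 # one month, hourly samples
    rng = np.random.default_rng(0)
    h = (1.2 * np.cos(omegas[0] * t) + 0.4 * np.sin(omegas[1] * t)
         + 0.05 * rng.standard_normal(t.size))         # synthetic tide + noise
    X, coeffs = fit_tidal_harmonics(t, h, omegas)
    fitted = X @ coeffs                                # reconstructed tidal level
    ```

    The same design matrix evaluated at future times yields a forecast, which is where the long-record requirement bites: closely spaced constituent frequencies can only be separated by sufficiently long records.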

    Ensembling classical machine learning and deep learning approaches for morbidity identification from clinical notes

    The past decade has seen an explosion in the amount of digital information generated within the healthcare domain. Digital data exist in the form of images, video, speech, transcripts, electronic health records, clinical records, and free text. Analysis and interpretation of healthcare data is a daunting task that demands a great deal of time, resources, and human effort. In this paper, we focus on the problem of co-morbidity recognition from patients' clinical records. To this aim, we employ both classical machine learning and deep learning approaches. We use word embeddings and bag-of-words representations, coupled with feature selection techniques. The goal of our work is to develop a classification system that identifies whether a certain health condition occurs for a patient by studying his/her past clinical records. In more detail, we have used pre-trained word2vec, domain-trained, GloVe, fastText, and Universal Sentence Encoder embeddings to tackle the classification of sixteen morbidity conditions within clinical records. We have compared the outcomes of the classical machine learning and deep learning approaches across the employed feature representation and feature selection methods, and we present a comprehensive discussion of their performance and behaviour. Finally, we have also used ensemble learning techniques over a large number of classifier combinations to improve on single-model performance. For our experiments, we used the n2c2 natural language processing research dataset, released by Harvard Medical School, which consists of clinical notes containing patient discharge summaries. Given the class imbalance and the small size of the data, the experimental results indicate the advantage of ensemble learning over single classifier models. In particular, ensemble learning slightly improved the performance of the single classification models but greatly reduced the variance of their predictions, stabilizing the accuracies (i.e., yielding a lower standard deviation than single classifiers). In real-life scenarios, our work can be employed to identify morbidity conditions of patients with high accuracy by feeding our tool with their current clinical notes. Moreover, other domains where classification is a common problem might benefit from our approach as well.
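
    To make the ensembling idea concrete, here is a minimal sketch in Python with scikit-learn: several heterogeneous classifiers vote over a shared bag-of-words view of the notes, which is the mechanism that reduces prediction variance. The toy notes, labels, and the specific trio of base classifiers are illustrative assumptions, not the paper's actual models or data.

    ```python
    from sklearn.ensemble import VotingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical toy notes with a binary label: 1 = morbidity present.
    notes = [
        "patient denies chest pain or dyspnea",
        "history of asthma, uses albuterol inhaler",
        "no respiratory complaints today",
        "wheezing noted, likely asthma exacerbation",
    ]
    labels = [0, 1, 0, 1]

    # Majority (hard) voting across heterogeneous models: individual errors
    # tend to cancel, which lowers the variance of the combined prediction.
    ensemble = make_pipeline(
        TfidfVectorizer(),
        VotingClassifier(
            estimators=[
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC()),
                ("nb", MultinomialNB()),
            ],
            voting="hard",
        ),
    )
    ensemble.fit(notes, labels)
    print(ensemble.predict(["asthma symptoms worsening overnight"]))
    ```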

    NetFPGA Hardware Modules for Input, Output and EWMA Bit-Rate Computation

    NetFPGA is a hardware board that is becoming increasingly popular in various research areas. It is a customizable hardware router that can be used to study, implement, and test new protocols and techniques directly in hardware, allowing researchers to work in a more realistic experimental environment. In this paper we present the design and development of four new modules built on top of the NetFPGA Reference Router design. In particular, they compute the input and output bit rates at run time and provide an estimate of the input bit rate based on an EWMA filter. Moreover, we extended the rate limiter module embedded within the output queues in order to test our improved Reference Router. Throughout the paper we explain each module in detail as far as the architecture and the implementation are concerned. Furthermore, we created a testing environment which shows the effectiveness and efficiency of our modules.
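
    For readers unfamiliar with the filter, here is a minimal software sketch of EWMA bit-rate estimation in Python; the hardware modules implement the same recurrence in logic. The sampling interval, the smoothing weight, and the sample values below are assumptions for illustration.

    ```python
    def ewma_bitrate(byte_counts, interval_s=1.0, alpha=0.25):
        """Exponentially weighted moving average of a link's bit rate.
        byte_counts: bytes observed in each fixed-length sampling interval.
        alpha: weight of the newest sample (assumed here; hardware designs
               often choose a power of two so the multiply becomes a shift).
        """
        estimate = 0.0
        for count in byte_counts:
            instant_bps = count * 8 / interval_s   # instantaneous bit rate
            # EWMA recurrence: new = alpha * sample + (1 - alpha) * old
            estimate = alpha * instant_bps + (1 - alpha) * estimate
            yield estimate

    # Example: per-second byte counters; a one-interval dip barely moves the EWMA.
    for bps in ewma_bitrate([125_000, 250_000, 240_000, 10_000, 245_000]):
        print(f"{bps:,.0f} bit/s")
    ```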

    TF-IDF vs word embeddings for morbidity identification in clinical notes: An initial study

    Today, we are seeing an ever-increasing number of clinical notes that contain clinical results, images, and textual descriptions of patients' health states. All these data can be analyzed and employed to provide novel services that can help people and domain experts with their common healthcare tasks. However, many technologies, such as deep learning, and tools, such as word embeddings, have started to be investigated only recently, and many challenges remain open when it comes to healthcare domain applications. To address these challenges, we propose the use of deep learning and word embeddings for identifying sixteen morbidity types within textual descriptions of clinical records. For this purpose, we have used a deep learning model based on Bidirectional Long Short-Term Memory (LSTM) layers, which can exploit state-of-the-art vector representations of data such as word embeddings. We have employed the pre-trained word embeddings GloVe and Word2Vec, as well as our own word embeddings trained on the target domain. Furthermore, we have compared the performance of the deep learning approaches against the traditional tf-idf representation used with a Support Vector Machine and a Multilayer Perceptron (our baselines). The obtained results suggest that the baselines outperform the deep learning approaches combined with any of the word embeddings. Our preliminary results indicate that there are specific features that make the dataset biased in favour of traditional machine learning approaches.
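
    A minimal sketch of this kind of architecture in Python with Keras: an embedding layer feeding a bidirectional LSTM and a dense classifier over the sixteen morbidity types. The vocabulary size, sequence length, layer widths, and the single 16-way softmax head are assumptions for illustration; in practice the embedding matrix would be initialized from GloVe, Word2Vec, or the domain-trained vectors.

    ```python
    import tensorflow as tf

    # Assumed sizes: vocabulary, embedding width, note length, morbidity classes.
    VOCAB, EMB_DIM, MAX_LEN, N_CLASSES = 20_000, 300, 512, 16

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(MAX_LEN,), dtype="int32"),
        # Initialize from pre-trained vectors in practice (GloVe/Word2Vec/domain).
        tf.keras.layers.Embedding(VOCAB, EMB_DIM),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```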

    Munchausen by internet: current research and future directions.

    The Internet has revolutionized the health world, enabling self-diagnosis and online support to take place irrespective of time or location. Alongside the positive effects on an individual's health from making use of the Internet, debate has intensified on how the increasing use of Web technology might have a negative impact on patients, caregivers, and practitioners. One such negative health-related behavior is Munchausen by Internet.

    Protons in near earth orbit

    The proton spectrum in the kinetic energy range 0.1 to 200 GeV was measured by the Alpha Magnetic Spectrometer (AMS) during space shuttle flight STS-91 at an altitude of 380 km. Above the geomagnetic cutoff the observed spectrum is parameterized by a power law. Below the geomagnetic cutoff a substantial second spectrum was observed, concentrated at equatorial latitudes, with a flux of ~70 m^-2 s^-1 sr^-1. Most of these second-spectrum protons follow a complicated trajectory and originate from a restricted geographic region.
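
    The power-law parameterization mentioned above has the standard cosmic-ray form, where the normalization and the spectral index are the fitted quantities (no numerical values from the paper are reproduced here):

    ```latex
    \Phi(E) = \Phi_0 \left(\frac{E}{1\,\mathrm{GeV}}\right)^{-\gamma}
    ```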

    Search for antihelium in cosmic rays

    The Alpha Magnetic Spectrometer (AMS) was flown on the space shuttle Discovery during flight STS-91 in a 51.7 degree orbit at altitudes between 320 and 390 km. A total of 2.86 × 10^6 helium nuclei were observed in the rigidity range 1 to 140 GV. No antihelium nuclei were detected at any rigidity. An upper limit on the flux ratio of antihelium to helium of < 1.1 × 10^-6 is obtained.
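
    As a rough consistency check (ignoring the acceptance and efficiency corrections that the actual analysis applies across the rigidity range), a background-free Poisson upper limit of about three events for zero observed candidates, divided by the observed helium count, already gives the quoted order of magnitude:

    ```latex
    \frac{\Phi_{\overline{\mathrm{He}}}}{\Phi_{\mathrm{He}}}
      \;\lesssim\; \frac{\sim 3}{2.86 \times 10^{6}} \;\approx\; 1 \times 10^{-6}
    ```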

    Extending vector hysteresis operators

    In some recent papers we studied how to extend to BV a hysteresis operator defined on Lipschitz-continuous inputs, preserving suitable continuity properties. More precisely, we considered the so-called strict metric, defined by means of the essential variation. This approach may have some drawbacks from the physical point of view; therefore, in the present paper we show how to extend a general hysteresis operator with respect to a notion of convergence that takes into account the pointwise variation rather than the essential variation.
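
    For readers unfamiliar with the distinction drawn above, the textbook definitions are as follows (standard notation, not necessarily that of the paper):

    ```latex
    % Pointwise variation: supremum over all partitions of [a,b]
    \operatorname{pV}(u;[a,b]) = \sup\Big\{ \sum_{i=1}^{n} \lvert u(t_i) - u(t_{i-1}) \rvert
        \;:\; a = t_0 < t_1 < \dots < t_n = b \Big\}

    % Essential variation: infimum of the pointwise variation over all
    % representatives agreeing with u almost everywhere
    \operatorname{eV}(u;[a,b]) = \inf\big\{ \operatorname{pV}(v;[a,b]) : v = u \ \text{a.e.} \big\}
    ```

    The essential variation is insensitive to modifications on null sets, so it can miss physically meaningful jumps of the input; the pointwise variation retains that information, which motivates the extension studied here.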