
    I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance

    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech; the database is made publicly available for research purposes. We start by demonstrating that, for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification using both brute-forced low-level acoustic features and higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can easily be solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features give the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. Early fusion of the intelligibility-related features with the brute-forced acoustic feature set improves performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a determination coefficient of up to 56.2%.
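
    The abstract describes SVM prediction evaluated in a leave-one-speaker-out framework and scored by average recall. The snippet below is a minimal sketch of that evaluation protocol using scikit-learn; the feature matrix, labels, and speaker IDs are synthetic placeholders rather than iHEARu-EAT data, and the kernel and C value are assumptions, not the paper's settings.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))            # placeholder acoustic feature vectors
y = rng.integers(0, 2, size=300)          # 0 = not eating, 1 = eating (binary task)
speakers = rng.integers(0, 30, size=300)  # one group label per speaker

recalls = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speakers):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    clf.fit(X[train_idx], y[train_idx])
    # Unweighted average recall over classes, the usual metric for this kind of task.
    recalls.append(recall_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"mean UAR over held-out speakers: {np.mean(recalls):.3f}")

    Grouping the folds by speaker ensures that utterances from the same person never appear in both training and test sets, which is what makes the reported recalls speaker-independent.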

    Application of artificial intelligence in the dental field : A literature review

    Purpose: The purpose of this study was to comprehensively review the literature regarding the application of artificial intelligence (AI) in the dental field, focusing on the evaluation criteria and architecture types. Study selection: Electronic databases (PubMed, Cochrane Library, Scopus) were searched. Full-text articles describing the clinical application of AI for the detection, diagnosis, and treatment of lesions and the AI method/architecture were included. Results: The primary search returned 422 studies from 1996 to 2019, and 58 studies were finally selected. Regarding the year of publication, the oldest study, reported in 1996, focused on "oral and maxillofacial surgery." A range of machine-learning architectures was employed in the selected studies, and approximately half of them (29/58) employed neural networks. Regarding the evaluation criteria, eight studies compared the results obtained by AI with the diagnoses formulated by dentists, while several studies compared two or more architectures in terms of performance. The following parameters were employed for evaluating AI performance: accuracy, sensitivity, specificity, mean absolute error, root mean squared error, and area under the receiver operating characteristic curve. Conclusion: Application of AI in the dental field has progressed; however, the criteria for evaluating the efficacy of AI have not been clarified. It is necessary to obtain better quality data for machine learning to achieve effective diagnosis of lesions and suitable treatment planning.
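
    As a point of reference for the evaluation criteria listed above, the snippet below illustrates how accuracy, sensitivity, specificity, and the area under the ROC curve are typically computed for a binary lesion-detection task; the labels and scores are invented placeholders, not data from any study in the review.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                     # ground-truth lesion labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])    # model probabilities
y_pred = (y_score >= 0.5).astype(int)                            # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on the positive (lesion) class
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)

print(accuracy, sensitivity, specificity, auc)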

    Preface

    DAMSS-2018 is the jubilee 10th international workshop on data analysis methods for software systems, organized in Druskininkai, Lithuania, at the end of the year, at the same place and the same time every year. Ten years have passed since the first workshop. The history of the workshop starts in 2009 with 16 presentations. The idea of such a workshop came up at the Institute of Mathematics and Informatics; the Lithuanian Academy of Sciences and the Lithuanian Computer Society supported it, and it met with approval both in the Lithuanian research community and abroad. This year there are 81 presentations and 113 registered participants from 13 countries. In 2010, the Institute of Mathematics and Informatics became a member of Vilnius University, the largest university of Lithuania. In 2017, the institute changed its name to the Institute of Data Science and Digital Technologies, a name that reflects its recent activities. The renewed institute has eight research groups: Cognitive Computing, Image and Signal Analysis, Cyber-Social Systems Engineering, Statistics and Probability, Global Optimization, Intelligent Technologies, Education Systems, and Blockchain Technologies. The main goal of the workshop is to introduce the research undertaken at Lithuanian and foreign universities in the fields of data science and software engineering. Annual organization of the workshop allows a fast exchange of new ideas within the research community. Eleven companies supported the workshop this year, which shows that its topics are relevant to business as well. Topics of the workshop cover big data, bioinformatics, data science, blockchain technologies, deep learning, digital technologies, high-performance computing, visualization methods for multidimensional data, machine learning, medical informatics, ontological engineering, optimization in data science, business rules, and software engineering. Seeking to facilitate relations between science and business, a special session and panel discussion were organized this year on topical business problems that may be solved together with the research community. This book gives an overview of all presentations of DAMSS-2018.

    Artificial intelligence methodologies and their application to diabetes

    In the past decade diabetes management has been transformed by the addition of continuous glucose monitoring and insulin pump data. More recently, a wide variety of functions and physiologic variables, such as heart rate, hours of sleep, number of steps walked, and movement, have become available through wristbands or watches. New data, such as hydration, geolocation, and barometric pressure, among others, will be incorporated in the future. All these parameters, when analyzed, can support decision-making by patients and doctors. Similar new scenarios have appeared in most medical fields, and in recent years there has been increased interest in the development and application of artificial intelligence (AI) methods for decision support and knowledge acquisition. Multidisciplinary research teams composed of computer engineers and doctors are increasingly common, mirroring the need for cooperation in this new field. AI, as a science, can be defined as the ability to make computers do things that would require intelligence if done by humans. Increasingly, diabetes-related journals have been incorporating publications focused on AI tools applied to diabetes. In summary, diabetes management has undergone a deep transformation that forces diabetologists to incorporate skills from new areas. This newly needed knowledge includes AI tools, which have become part of diabetes health care. The aim of this article is to explain, in an easy and plain way, the most used AI methodologies, in order to promote the involvement of health care providers (doctors and nurses) in this field.

    A Machine Learning Decision Support System (DSS) for Neuroendocrine Tumor Patients Treated with Somatostatin Analog (SSA) Therapy

    The application of machine learning (ML) techniques could facilitate the identification of predictive biomarkers of somatostatin analog (SSA) efficacy in patients with neuroendocrine tumors (NETs). We collected data from 74 patients with a pancreatic or gastrointestinal NET who received SSA as first-line therapy. We developed three classification models to predict whether the patient would experience progressive disease (PD) after 12 or 18 months based on clinicopathological factors at baseline. The dataset included 70 samples and 15 features. The three initial classification models achieved accuracy ranging from 55% to 70%. We then compared ten different ML algorithms. In all but one case, the performance of the Multinomial Naive Bayes algorithm (80%) was the highest. The support vector machine classifier (SVC) had a higher performance for the recall metric of the progression-free outcome (97% vs. 94%). Overall, for the first time, we documented that the factors that mainly influenced progression-free survival (PFS) included age, the number of metastatic sites, and the primary site. In addition, the following factors were also isolated as important: adverse events G3-G4, sex, Ki67, metastatic site (liver), functioning NET, the primary site, and the stage. In patients with advanced NETs, ML provides a predictive model that could potentially be used to differentiate prognostic groups and to identify patients for whom SSA therapy as a single agent may not be sufficient to achieve a long-lasting PFS.
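
    The comparison described above (Multinomial Naive Bayes against a support vector classifier, with recall on the progression-free class as one of the reported metrics) can be sketched as follows; the synthetic features, label encoding, cross-validation scheme, and hyperparameters are all assumptions, not the study's actual pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.metrics import make_scorer, recall_score

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(70, 15)).astype(float)  # encoded baseline clinical factors (non-negative)
y = rng.integers(0, 2, size=70)                      # 1 = progressive disease at follow-up

# Recall of the progression-free class (label 0), the metric highlighted for the SVC.
pf_recall = make_scorer(recall_score, pos_label=0)

for name, model in [("MultinomialNB", MultinomialNB()), ("SVC", SVC(kernel="rbf", C=1.0))]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    rec = cross_val_score(model, X, y, cv=5, scoring=pf_recall).mean()
    print(f"{name}: accuracy={acc:.2f}, progression-free recall={rec:.2f}")

    Note that Multinomial Naive Bayes expects non-negative feature values, so clinical factors would need to be encoded accordingly (for example, as categories or counts).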

    Predicting Corrosion Damage in the Human Body Using Artificial Intelligence: In Vitro Progress and Future Applications

    Artificial intelligence (AI) is used in the clinic to improve patient care. While its successes illustrate the impact AI can have, few studies have led to improved clinical outcomes, and a gap in translational studies, beginning at the basic science level, exists. In this review, we focus on how AI models implemented in non-orthopedic fields of corrosion science may apply to the study of orthopedic alloys. We first define and introduce fundamental AI concepts and models, as well as physiologically relevant corrosion damage modes. We then systematically review the corrosion/AI literature. Finally, we identify several AI models that may be implemented to study fretting, crevice, and pitting corrosion of titanium and cobalt-chrome alloys.

    Pediatric Bone Age Analysis and Brain Disease Prediction for Computer-Aided Diagnosis

    Recent advances in 3D scanning technology have led to widespread use of 3D shapes in a multitude of fields, including computer vision and medical imaging. These shapes are, however, often contaminated by noise, which needs to be removed or attenuated in order to ensure high-quality 3D shapes for subsequent use in downstream tasks. On the other hand, the availability of large-scale pediatric hand radiograph and brain imaging benchmarks has sparked a surge of interest in designing efficient techniques for bone age assessment and brain disease prediction, which are fundamental problems in computer-aided diagnosis. Bone age is an effective metric for assessing the skeletal and biological maturity of children, while understanding how the brain develops is crucial for designing prediction models for the classification of brain disorders. In this thesis, we present a feature-preserving framework for carpal bone surface denoising in the graph signal processing setting. The proposed denoising framework is formulated as a constrained optimization problem with an objective function comprised of a fidelity term specified by a noise model and a regularization term associated with a data prior. We show through experimental results that our approach can remove noise effectively while preserving the nonlinear features of surfaces, such as curved surface regions and fine details. Moreover, recovering high-quality surfaces from noisy carpal bone surfaces is of paramount importance to the diagnosis of wrist pathologies, such as arthritis and carpal tunnel syndrome. We also introduce a deep learning approach to pediatric bone age assessment using instance segmentation and ridge regression. This approach is comprised of two intertwined stages. In the first stage, we employ an image annotation and instance segmentation model to extract and separate different regions of interest in an image. In the second stage, we leverage the power of transfer learning by designing a deep neural network with a ridge regression output layer. For the classification of brain disorders, we propose an aggregator normalization graph convolutional network by exploiting aggregation in graph sampling, skip connections, and identity mapping. We also integrate both imaging and non-imaging features into the graph nodes and edges, respectively, with the aim of augmenting predictive capabilities. We validate our proposed approaches through extensive experiments on various benchmark datasets, demonstrating competitive performance in comparison with strong baseline methods.
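
    The denoising formulation in the abstract combines a fidelity term from a noise model with a regularization term from a data prior. One common instantiation of that idea in graph signal processing is Tikhonov (graph-Laplacian) smoothing, sketched below on a toy graph; the thesis's actual noise model, prior, and constraints may differ from this minimal example.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 100
# Path graph as a stand-in for a mesh vertex graph: adjacency W and combinatorial Laplacian L.
W = sp.diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1], format="csc")
L = sp.diags(np.asarray(W.sum(axis=1)).ravel(), format="csc") - W

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * rng.normal(size=n)   # noisy vertex signal
lam = 5.0                                                             # regularization weight (assumed)

# Closed-form minimizer of ||x - y||^2 + lam * x^T L x.
x_hat = spsolve(sp.identity(n, format="csc") + lam * L, y)
print(float(np.linalg.norm(y - x_hat)))

    The fidelity term keeps the denoised signal close to the observations, while the Laplacian quadratic form penalizes differences across graph edges, which is what preserves smooth structure while attenuating noise.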