
    Specialized translation at work for a small, expanding company: my experience internationalizing Bioretics© S.r.l. into Chinese

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, together with its high technologies aiming to implement human cognitive abilities in machinery. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market both in Italy and in China; Chapter 3 provides the theoretical foundations for every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    Acute Stroke Care: Strategies For Improving Diagnostics

    Stroke is one of the leading causes of death and disability, with a high incidence of over 11 million cases annually worldwide. The costs of treatment and rehabilitation, loss of work, and the hardships resulting from stroke are a major burden at both the individual and the societal level. Importantly, stroke therapies need to be initiated early to be effective. Thrombolytic therapy and mechanical thrombectomy are early treatment options for ischemic stroke. In hemorrhagic stroke, optimization of hemodynamic and hemostatic parameters is central, and surgery is considered in a subset of patients. Efficient treatment of stroke requires early and precise recognition of stroke at all stages of the treatment chain. This includes identification of patients with suspected acute stroke by emergency medical dispatchers and emergency medical services (EMS) staff, and precise admission diagnostics by the receiving on-call stroke team. Success requires grasping the complexity of stroke symptoms, which depend on the brain areas affected, and the plethora of medical conditions that can mimic stroke. The Helsinki Ultra-acute Stroke Biomarker Study includes a cohort of 1015 patients transported to hospital due to suspected acute stroke, as candidates for revascularization therapies. Based on this cohort, this thesis work has explored new avenues to improve early stroke diagnostics at all stages of the treatment chain. In a detailed investigation into the identification of stroke by emergency medical dispatchers, we analyzed emergency phone calls with missed stroke identification. We also combined dispatch, EMS, and hospital records to identify causes of missed stroke identification during emergency calls. Most importantly, we found that a patient’s fall at onset and patient confusion were strongly associated with missed identification. Regarding the Face Arm Speech Test (FAST), the symptom most likely to be misidentified was acute speech disturbance.
Using prehospital blood sampling of stroke patients, and ultrasensitive measurement, we investigated the early dynamics of the plasma biomarkers glial fibrillary acidic protein (GFAP) and total tau. Utilizing serial sampling, we demonstrate for the first time that monitoring the early release rate of GFAP can improve the diagnostic performance of this biomarker for early differentiation between ischemic and hemorrhagic stroke. In our analysis of early GFAP levels, we were able to differentiate with high accuracy two-thirds of all patients with acute cerebral ischemia from those with hemorrhagic stroke, supporting further investigation of this biomarker as a promising point-of-care tool for prehospital stroke diagnostics. We performed a detailed review of the admission diagnostics of our cohort of 1015 patients to explore causes and predictors of admission misdiagnosis. We then investigated the consequences of misdiagnosis on outcomes. We demonstrate in this large cohort that the highly optimized and rapid admission evaluation in our hospital district (door-to-needle times below 20 minutes) did not compromise the accuracy and safety of admission evaluation. In addition, we discovered targets for improving future diagnostics. Finally, our detailed neuropathological investigation of a case of cerebral amyloid angiopathy (CAA) -related hemorrhage after stroke thrombolysis provided unique tissue-level evidence for this common vasculopathy as a notable risk factor for intracranial hemorrhagic complications in the setting of stroke. These findings support research to improve the diagnostics of CAA, and the prediction of hemorrhagic complications associated with stroke thrombolysis. In conclusion, these proposed targets and strategies will aid in the future improvement and development of this highly important field of diagnostics. 
Our proof-of-concept discoveries on early GFAP kinetics help guide further study into this diagnostic approach just as highly sensitive point-of-care GFAP measurement instruments are becoming available. Finally, our results support the safety of worldwide efforts to optimize emergency department door-to-needle times when care is taken to ensure sufficient expertise is in place, highlighting the role of the on-call vascular neurologist as a central diagnostic asset.

Stroke is one of the most common causes of death and long-term disability. The treatment and rehabilitation costs, loss of working capacity, and everyday hardships caused by stroke place a substantial burden on the individual, their loved ones, and society alike. The speed required by effective therapies demands early and accurate recognition of stroke at every step of the treatment chain. This thesis work sought new ways to improve early stroke diagnostics in the emergency dispatch centre, in prehospital emergency care, and in the emergency department of the receiving hospital, HYKS. A detailed analysis of stroke identification in the dispatch centre showed that a patient's fall and confusion were key factors in missed identification. Of the Face Arm Speech Test (FAST) screening symptoms, speech disturbance was the most likely to be misidentified. Using acute-phase blood samples and an ultrasensitive assay, we studied the early dynamics of two blood-based biomarkers in stroke patients: glial fibrillary acidic protein (GFAP), from the brain's astrocytes, and tau. We showed for the first time that monitoring the early release rate of GFAP in serial samples can improve this biomarker's ability to discriminate between ischemic and hemorrhagic brain injury in early diagnostics. The results suggest that GFAP could in the future be developed into a rapid blood test usable in ambulances to aid the early differentiation of stroke subtypes. In the substudy focusing on emergency department diagnostics, we showed for the first time in a large cohort that the highly optimized, very rapid admission evaluation in our hospital district (median thrombolysis delay under 20 minutes, including head imaging) does not compromise the diagnostic accuracy or treatment safety of stroke patients. The development targets and methods presented in this thesis will support future work in this highly important field of diagnostics. The results include pioneering observations on the use of blood GFAP kinetics in early stroke diagnostics and support the diagnostic accuracy of admission evaluation in HYKS's notably fast thrombolysis chain.
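
The release-rate idea described above can be sketched in a few lines. This is a hypothetical illustration only: the threshold, units, and decision rule are invented for demonstration and are not the study's actual cut-offs.

```python
# Hypothetical sketch: flagging a stroke subtype from the early GFAP
# release rate estimated from two serial prehospital blood samples.
# Threshold and units are illustrative assumptions, not study values.

def gfap_release_rate(c1_pg_ml: float, c2_pg_ml: float,
                      t1_min: float, t2_min: float) -> float:
    """Estimate the GFAP release rate (pg/mL per minute) from two samples."""
    if t2_min <= t1_min:
        raise ValueError("second sample must be drawn after the first")
    return (c2_pg_ml - c1_pg_ml) / (t2_min - t1_min)

def suggests_hemorrhage(rate: float, threshold: float = 2.0) -> bool:
    """Flag a rapidly rising GFAP level as suggestive of hemorrhagic stroke.
    The 2.0 pg/mL/min threshold is a placeholder for illustration."""
    return rate > threshold

# Example: concentration rising from 40 to 160 pg/mL over 30 minutes
rate = gfap_release_rate(40.0, 160.0, 0.0, 30.0)
print(rate, suggests_hemorrhage(rate))
```

In practice such a rule would be calibrated on cohort data; the point of the sketch is only that serial sampling turns a single noisy concentration into a rate, which is the quantity the thesis found diagnostically useful.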

    Schizo-Net: A novel Schizophrenia Diagnosis Framework Using Late Fusion Multimodal Deep Learning on Electroencephalogram-Based Brain Connectivity Indices

    Schizophrenia (SCZ) is a serious mental condition that causes hallucinations, delusions, and disordered thinking. Traditionally, SCZ diagnosis involves an interview of the subject by a skilled psychiatrist; the process takes time and is prone to human error and bias. Recently, brain connectivity indices have been used in a few pattern recognition methods to discriminate neuro-psychiatric patients from healthy subjects. This study presents Schizo-Net, a novel, highly accurate, and reliable SCZ diagnosis model based on a late multimodal fusion of brain connectivity indices estimated from EEG activity. First, the raw EEG activity is pre-processed exhaustively to remove unwanted artifacts. Next, six brain connectivity indices are estimated from the windowed EEG activity, and six different deep learning architectures (with varying neurons and hidden layers) are trained. The present study is the first to consider such a large number of brain connectivity indices, especially for SCZ. A detailed study was also performed to identify SCZ-related changes in brain connectivity, highlighting the vital significance of brain connectivity indices for identifying biomarkers of the disease. Schizo-Net surpasses current models and achieves 99.84% accuracy. An optimum deep learning architecture selection is also performed for improved classification. The study also establishes that the late fusion technique outperforms single-architecture-based prediction in diagnosing SCZ.
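
Late (decision-level) fusion of the kind described above can be sketched minimally: each connectivity index gets its own classifier, and the per-model class probabilities are combined at the end. The probability vectors and the simple averaging rule below are illustrative assumptions, not the paper's exact fusion scheme.

```python
# Minimal sketch of late fusion: average class-probability vectors
# produced by several independently trained models. All numbers here
# are made up for illustration.

def late_fusion(prob_vectors):
    """Average class-probability vectors from several models."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    return [sum(p[c] for p in prob_vectors) / n_models
            for c in range(n_classes)]

# Three models (e.g. trained on three different connectivity indices),
# each emitting [P(healthy), P(SCZ)]:
probs = [[0.30, 0.70], [0.20, 0.80], [0.40, 0.60]]
fused = late_fusion(probs)
prediction = max(range(len(fused)), key=fused.__getitem__)
print(fused, prediction)
```

The design point is that fusion happens after each architecture has produced its own decision scores, so a weak index-specific model cannot corrupt the features seen by the others, only dilute the final vote.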

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This ïŹfth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different ïŹelds of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. First Part of this book presents some theoretical advances on DSmT, dealing mainly with modiïŹed Proportional ConïŹ‚ict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classiïŹers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. 
Because more applications of DSmT have emerged in the past years since the apparition of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identiïŹcation of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classiïŹcation. Finally, the third part presents interesting contributions related to belief functions in general published or presented along the years since 2015. These contributions are related with decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classiïŹcation, and hybrid techniques mixing deep learning with belief functions as well
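
To give a flavour of the PCR5 rule featured in the theoretical part, here is a minimal sketch for the standard two-source case on a two-hypothesis frame {A, B}: the conjunctive consensus is computed first, then each partial conflict m1(X)m2(Y) with X∩Y = ∅ is redistributed back to X and Y in proportion to the masses that caused it. The mass values in the example are invented for illustration.

```python
# Sketch of the PCR5 combination rule for two basic belief assignments
# on the frame {A, B}, with masses on A, B and the disjunction A∪B.
# Example masses are invented; the rule itself is the standard PCR5.

def pcr5_binary(m1, m2):
    """Combine two BBAs given as dicts with keys 'A', 'B', 'AuB'."""
    m = {'A': 0.0, 'B': 0.0, 'AuB': 0.0}
    # Conjunctive consensus on non-empty intersections
    m['A'] = m1['A']*m2['A'] + m1['A']*m2['AuB'] + m1['AuB']*m2['A']
    m['B'] = m1['B']*m2['B'] + m1['B']*m2['AuB'] + m1['AuB']*m2['B']
    m['AuB'] = m1['AuB']*m2['AuB']
    # Proportional redistribution of the two partial conflicts (A∩B = ∅)
    for x, y in (('A', 'B'), ('B', 'A')):
        c = m1[x] * m2[y]                      # conflicting product
        if c > 0:
            m[x] += m1[x]**2 * m2[y] / (m1[x] + m2[y])
            m[y] += m2[y]**2 * m1[x] / (m1[x] + m2[y])
    return m

m1 = {'A': 0.6, 'B': 0.3, 'AuB': 0.1}
m2 = {'A': 0.2, 'B': 0.3, 'AuB': 0.5}
print(pcr5_binary(m1, m2))
```

Because every unit of conflicting mass is returned to the two focal elements that generated it, the combined masses still sum to one, with no mass transferred to a third, uninvolved hypothesis as Dempster's normalization would effectively do.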

    30th European Congress on Obesity (ECO 2023)

    This is the abstract book of the 30th European Congress on Obesity (ECO 2023).

    Riemannian statistical techniques with applications in fMRI

    Over the past 30 years functional magnetic resonance imaging (fMRI) has become a fundamental tool in cognitive neuroimaging studies. In particular, the emergence of resting-state fMRI has gained popularity in determining biomarkers of mental health disorders (Woodward & Cascio, 2015). Resting-state fMRI can be analysed using the functional connectivity matrix, an object that encodes the temporal correlation of blood activity within the brain. Functional connectivity matrices are symmetric positive definite (SPD) matrices, but common analysis methods either reduce the functional connectivity matrices to summary statistics or fail to account for the positive definite criterion. However, through the lens of Riemannian geometry, functional connectivity matrices have an intrinsic non-linear shape that respects the positive definite criterion (the affine-invariant geometry (Pennec, Fillard, & Ayache, 2006)). With methods from Riemannian geometric statistics, we can begin to explore the shape of the functional brain to understand this non-linear structure and reduce data loss in our analyses. This thesis offers two novel methodological developments to the field of Riemannian geometric statistics inspired by methods used in fMRI research. First we propose geometric-MDMR, a generalisation of multivariate distance matrix regression (MDMR) (McArdle & Anderson, 2001) to Riemannian manifolds. Our second development is Riemannian partial least squares (R-PLS), the generalisation of the predictive modelling technique partial least squares (PLS) (H. Wold, 1975) to Riemannian manifolds. R-PLS extends geodesic regression (Fletcher, 2013) to manifold-valued response and predictor variables, similar to how PLS extends multiple linear regression. We also generalise the NIPALS algorithm to Riemannian manifolds and suggest a tangent space approximation as a proposed method to fit R-PLS.
In addition to our methodological developments, this thesis offers three more contributions to the literature. Firstly, we develop a novel simulation procedure to simulate realistic functional connectivity matrices through a combination of bootstrapping and the Wishart distribution. Second, we propose the R2S statistic for measuring subspace similarity using the theory of principal angles between subspaces. Finally, we propose an extension of the VIP statistic from PLS (S. Wold, Johansson, & Cocchi, 1993) to describe the relationship between individual predictors and response variables when predicting a multivariate response with PLS. All methods in this thesis are applied to two fMRI datasets: the COBRE dataset relating to schizophrenia, and the ABIDE dataset relating to Autism Spectrum Disorder (ASD). We show that geometric-MDMR can detect group-based differences between ASD and neurotypical controls (NTC), unlike its Euclidean counterparts. We also demonstrate the efficacy of R-PLS through the detection of functional connections related to schizophrenia and ASD. These results are encouraging for the role of Riemannian geometric statistics in the future of neuroscientific research.
Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 202
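
The affine-invariant geometry cited above (Pennec, Fillard, & Ayache, 2006) gives a closed-form geodesic distance between SPD matrices, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, which is the kind of distance a geometric-MDMR analysis would consume. A minimal sketch, with toy 2×2 matrices standing in for real connectivity matrices:

```python
# Sketch of the affine-invariant geodesic distance between SPD matrices.
# The 2x2 matrices are toy stand-ins for functional connectivity matrices.
import numpy as np

def _sym_funcm(S, f):
    """Apply a scalar function to a symmetric matrix via its eigenvalues."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(f(w)) @ V.T

def affine_invariant_distance(A, B):
    """d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F for SPD A, B."""
    A_inv_sqrt = _sym_funcm(A, lambda w: 1.0 / np.sqrt(w))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    log_M = _sym_funcm((M + M.T) / 2, np.log)  # symmetrize for stability
    return np.linalg.norm(log_M, ord='fro')

A = np.array([[2.0, 0.3], [0.3, 1.0]])  # toy SPD "connectivity" matrix
B = np.eye(2)
print(affine_invariant_distance(A, B))
```

Unlike the Euclidean distance between matrix entries, this distance stays finite only within the SPD cone and is invariant under affine changes of basis, which is exactly the property that motivates working on the manifold rather than on vectorized correlations.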

    Mental stimulation and multimodal trials to prevent cognitive impairment and Alzheimer's disease

    Theoretical models of the dynamic biomarkers underlying the development of Alzheimer's disease (AD) acknowledge that there is inter-individual variability in the cognitive performance associated with any level of AD pathology. Mentally stimulating activities, such as schooling, occupation, and leisure activities, may contribute to this variability, but it is still unclear how this can best be assessed, and how such effects vary across AD severity and among individuals at risk for cognitive impairment. The association between mental stimulation and cognitive performance also suggests that it is important to account for mental stimulation levels in randomized clinical trials (RCTs) comparing rates of cognitive change between interventions (i.e., drugs, lifestyle interventions) and controls. The aim of this thesis was to investigate a) how pre-existing levels of occupational complexity affect the cognitive outcomes of a multimodal lifestyle-based RCT among older adults at increased risk for dementia based on a validated risk score, b) whether occupational complexity is associated with cognitive performance among individuals at risk for dementia, including individuals in the early stages of symptomatic AD (prodromal AD), and c) whether occupational complexity is associated with resilience to AD pathology, measured with validated biomarkers and neuroimaging, among individuals at risk for cognitive impairment and with prodromal AD. The four studies in this thesis were based on data from the Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER), the Karolinska University Hospital electronic database and biobank for clinical research (GEDOC) and the Multimodal Prevention Trial for Alzheimer's Disease (MIND-ADmini). Study I.
This study used data from the FINGER study (N=1026) to investigate whether pre-existing levels of occupational complexity were associated with cognitive function at baseline, and whether occupational complexity was associated with the rate of change in cognition during the 2-year intervention period. For all measures of occupational complexity, higher levels of complexity were associated with better cognitive outcomes at baseline. Occupational complexity was not associated with the rate of cognitive change during the intervention, except for the executive function outcome, for which higher levels of complexity predicted greater improvement (β [SE]: 0.028 [0.014], p = .044). Study II. This study used data from the FINGER neuroimaging cohort to investigate whether the association between occupational complexity and cognition was moderated by measures of brain integrity, in terms of both magnetic resonance imaging (MRI, N=126) and Pittsburgh Compound B positron emission tomography (PiB-PET, N=41). The results showed that higher levels of occupational complexity were associated with better cognitive performance for some outcomes after adjusting for the Alzheimer's Disease Signature (ADS) and medial temporal atrophy (MTA). However, for most types of neuropathology and cognitive outcomes, moderation effects indicated that higher occupational complexity levels were associated with better cognitive performance only in people with higher brain integrity, suggesting a lack of occupational complexity-related resilience mechanisms. Study III. This study investigated the association between mental stimulation (occupational complexity and education) and validated AD biomarkers, Aβ1–42, p-tau and t-tau, measured in cerebrospinal fluid (CSF). Using data from the GEDOC database, 174 individuals with prodromal AD were included, and analyses were adjusted for cognitive function.
The results indicated that both higher occupational complexity and higher education were associated with higher levels of p-tau and t-tau. For education, the association with tau pathology was age dependent. No association was found with Aβ1–42. This suggests that higher education and occupational complexity may provide resilience against tau-related pathology in prodromal AD. Study IV. This study used data from FINGER, GEDOC, and MIND-ADmini, thus including a total of 1410 individuals, 1207 at risk for dementia and 203 with prodromal AD. The aim was to compare the two most common rating systems for occupational complexity, the Occupational Information Network (O*NET) and the Dictionary of Occupational Titles (DOT), and to assess whether there was an association between occupational complexity and episodic memory performance among individuals at risk for dementia. The study found that higher occupational complexity was associated with memory performance only in the FINGER cohort, not in the two prodromal AD cohorts. The correlation between the two rating systems was moderate to strong, and highly significant (Spearman's rho = 0.5-0.6, p < .001). Conclusions. Higher levels of occupational complexity are associated with better cognitive performance among older individuals at risk for dementia (and with no substantial cognitive impairment), but do not affect the intervention effect in the FINGER multidomain lifestyle-based RCT, apart from the effect on executive function. Occupational complexity does not seem to provide strong resilience against neuropathology among individuals at risk for cognitive impairment. Among individuals with prodromal AD, higher levels of occupational complexity do seem to provide resilience to tau-related pathology measured with CSF markers, but are not associated with better episodic memory performance. Measuring occupational complexity with the DOT or O*NET system seems to yield similar results, as the two systems' scores are correlated.
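
The moderate-to-strong rank correlation reported between the two rating systems can be illustrated with Spearman's rho, which correlates ranks rather than raw scores and so tolerates the systems' different scales. The complexity scores below are invented for demonstration, not data from these studies.

```python
# Illustrative sketch: Spearman's rank correlation between two
# occupational-complexity rating systems (as with O*NET vs DOT).
# All scores are made up for demonstration.

def rank(values):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                 # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

onet = [3.1, 4.5, 2.0, 5.2, 3.8]   # hypothetical O*NET complexity scores
dot  = [2,   5,   1,   4,   6]     # hypothetical DOT complexity scores
print(spearman_rho(onet, dot))
```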

    Online Machine Learning for Inference from Multivariate Time-series

    Inference and data analysis over networks have become significant areas of research due to the increasing prevalence of interconnected systems and the growing volume of data they produce. Many of these systems generate data in the form of multivariate time series, which are collections of time series observed simultaneously across multiple variables. For example, EEG measurements of the brain produce multivariate time series that record the electrical activity of different brain regions over time. Cyber-physical systems generate multivariate time series that capture the behavior of physical systems in response to cybernetic inputs. Similarly, financial time series reflect the dynamics of multiple financial instruments or market indices over time. Through the analysis of these time series, one can uncover important details about the behavior of the system, detect patterns, and make predictions. Therefore, designing effective methods for data analysis and inference over networks of multivariate time series is a crucial area of research with numerous applications across various fields. In this Ph.D. thesis, our focus is on identifying the directed relationships between time series and leveraging this information to design algorithms for data prediction as well as missing data imputation. This Ph.D. thesis is organized as a compendium of papers, consisting of seven chapters and appendices. The first chapter is dedicated to motivation and a literature survey, whereas in the second chapter we present the fundamental concepts that readers should understand to grasp the material presented in the dissertation with ease. In the third chapter, we present three online nonlinear topology identification algorithms, namely NL-TISO, RFNL-TISO, and RFNL-TIRSO. In this chapter, we assume the data is generated from a sparse nonlinear vector autoregressive (VAR) model, and propose online data-driven solutions for identifying nonlinear VAR topology.
We also provide convergence guarantees in terms of dynamic regret for the proposed algorithm RFNL-TIRSO. Chapters four and five of the dissertation delve into the issue of missing data and explore how the learned topology can be leveraged to address this challenge. Chapter five is distinct from the other chapters in its exclusive focus on edge flow data and introduces an online imputation strategy based on a simplicial complex framework that leverages the known network structure in addition to the learned topology. Chapter six of the dissertation takes a different approach, assuming that the data is generated from nonlinear structural equation models. In this chapter, we propose an online topology identification algorithm using a time-structured approach, incorporating information from both the data and the model evolution. The algorithm is shown to have convergence guarantees achieved by bounding the dynamic regret. Finally, chapter seven of the dissertation provides concluding remarks and outlines potential future research directions.
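
A heavily simplified sketch of the online topology-identification idea: a linear VAR(1) model y[t] = A y[t-1] + noise is fitted by stochastic-gradient updates as samples stream in, and the nonzero entries of the estimated coefficient matrix reveal directed dependencies between series. The cited algorithms (NL-TISO, RFNL-TISO, RFNL-TIRSO) are nonlinear and sparsity-regularized; this linear, unregularized version, with an invented ground-truth matrix, only illustrates the online estimation loop.

```python
# Toy online estimation of a linear VAR(1) coefficient matrix via LMS-style
# updates. A_true encodes an invented directed graph (1→2 plus self-loops).
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.4, 0.3, 0.0],
                   [0.0, 0.0, 0.6]])

A_hat = np.zeros((3, 3))
step = 0.02
y_prev = rng.standard_normal(3)
for t in range(20000):
    y = A_true @ y_prev + 0.5 * rng.standard_normal(3)  # new observation
    err = y - A_hat @ y_prev                  # one-step prediction error
    A_hat += step * np.outer(err, y_prev)     # online gradient step
    y_prev = y

print(np.round(A_hat, 2))
```

The estimate tracks the model as data arrives, which is the "online" aspect; the real algorithms add kernels for nonlinearity and a sparsity penalty so that spurious small entries are driven exactly to zero.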

    Using XAI in the Clock Drawing Test to reveal the cognitive impairment pattern.

    The prevalence of dementia is currently increasing worldwide. This syndrome produces a deterioration in cognitive function that cannot be reverted. However, an early diagnosis can be crucial for slowing its progress. The Clock Drawing Test (CDT) is a widely used paper-and-pencil test for cognitive assessment in which an individual has to manually draw a clock on a paper. There are many scoring systems for this test, and most of them depend on the subjective assessment of the expert. This study proposes a computer-aided diagnosis (CAD) system based on artificial intelligence (AI) methods to analyze the CDT and obtain an automatic diagnosis of cognitive impairment (CI). This system employs a preprocessing pipeline in which the clock is detected, centered and binarized to decrease the computational burden. Then, the resulting image is fed into a Convolutional Neural Network (CNN) to identify the informative patterns within the CDT drawings that are relevant for the assessment of the patient's cognitive status. Performance is evaluated in a real context where patients with CI and controls have been classified by clinical experts in a balanced sample of 3282 drawings. The proposed method provides an accuracy of 75.65% in the binary case-control classification task, with an AUC of 0.83. These results are indeed relevant considering the use of the classic version of the CDT. The large size of the sample suggests that the proposed method is highly reliable for use in clinical contexts and demonstrates the suitability of CAD systems in the CDT assessment process. Explainable artificial intelligence (XAI) methods are applied to identify the most relevant regions during classification. Finding these patterns is extremely helpful for understanding the brain damage caused by CI.
A validation method using resubstitution with upper-bound correction in a machine learning approach is also discussed. This work was supported by MCIN/AEI/10.13039/501100011033/ and FEDER "Una manera de hacer Europa" under the RTI2018-098913-B100 project, by the Consejería de Economía, Innovación, Ciencia y Empleo (Junta de Andalucía) and FEDER under the CV20-45250, A-TIC080-UGR18, B-TIC-586-UGR20 and P20-00525 projects, and by the Ministerio de Universidades under the FPU18/04902 grant given to C. Jimenez-Mesa and the Margarita Salas grant given to J.E. Arco.
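
The binarize-and-center preprocessing step described above can be sketched with plain array operations. The toy image, the fixed threshold of 128, and centering on the ink centroid are illustrative assumptions, not the paper's exact pipeline (which also detects the clock region).

```python
# Hedged sketch: binarize a grayscale clock drawing (dark ink on white
# paper) and shift the ink's centroid to the image centre, as a CNN-ready
# preprocessing step. Threshold and toy image are illustrative choices.
import numpy as np

def preprocess(drawing: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize (ink = 1) and re-center the ink's centroid."""
    binary = (drawing < threshold).astype(np.uint8)   # dark pixels are ink
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        return binary                                  # blank page
    cy, cx = ys.mean(), xs.mean()
    h, w = binary.shape
    shift = (round(h / 2 - cy), round(w / 2 - cx))
    return np.roll(binary, shift, axis=(0, 1))

# Toy 8x8 grayscale "drawing": a dark 2x2 blob in the top-left corner
img = np.full((8, 8), 255, dtype=np.uint8)
img[0:2, 0:2] = 0
print(np.nonzero(preprocess(img)))
```

Normalizing position like this removes a nuisance factor (where on the page the patient drew) so the network can spend its capacity on the clinically informative shape of the drawing instead.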