
    Specialized translation at work for a small expanding company: my experience with the Chinese-language internationalization of Bioretics© S.r.l.

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, with its technologies aiming to implement human cognitive abilities in machines. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project, from English into Chinese, of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    MDAS: a new multimodal benchmark dataset for remote sensing

    In Earth observation, multimodal data fusion is an intuitive strategy for breaking the limitations of individual data sources. Complementary physical contents of data sources allow comprehensive and precise information retrieval. With current satellite missions, such as the ESA Copernicus programme, various data are accessible at an affordable cost, and future applications will have many options for data sources. Such a privilege can be beneficial only if algorithms are ready to work with various data sources. However, current data fusion studies mostly focus on the fusion of two data sources, for two reasons. First, different combinations of data sources face different scientific challenges: for example, the fusion of synthetic aperture radar (SAR) data and optical images needs to handle geometric differences, while the fusion of hyperspectral and multispectral images deals with different resolutions in the spatial and spectral domains. Second, it is still expensive, both financially and in labour, to acquire multiple data sources for the same region at the same time. In this paper, we provide the community with a benchmark multimodal data set, MDAS, for the city of Augsburg, Germany. MDAS includes synthetic aperture radar data, a multispectral image, a hyperspectral image, a digital surface model (DSM), and geographic information system (GIS) data, all collected on the same date, 7 May 2018. MDAS is a new benchmark data set that offers researchers rich options for data selection. In this paper, we run experiments for three typical remote sensing applications, namely resolution enhancement, spectral unmixing, and land cover classification, on the MDAS data set. Our experiments demonstrate the performance of representative state-of-the-art algorithms, whose outcomes can serve as baselines for further studies. The data set is publicly available at https://doi.org/10.14459/2022mp1657312 (Hu et al., 2022a) and the code (including the pre-trained models) at https://doi.org/10.5281/zenodo.7428215 (Hu et al., 2022b).
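    As a rough illustration of how such co-registered modalities might be combined for a fusion experiment, the sketch below stacks several rasters along the channel axis with rasterio and NumPy. The file names, and the assumption that all rasters share one grid and resolution, are ours, not part of the MDAS documentation.

        # Minimal sketch of stacking co-registered modalities for a fusion
        # experiment. File names and band counts are hypothetical; the actual
        # archive layout is documented at the DOI given in the abstract.
        import numpy as np
        import rasterio

        def load_band_stack(paths):
            """Read co-registered rasters and stack them along the band axis."""
            arrays = []
            for path in paths:
                with rasterio.open(path) as src:
                    arrays.append(src.read())  # shape: (bands, height, width)
            # Assumes all rasters share the same grid and resolution.
            return np.concatenate(arrays, axis=0)

        # Hypothetical file names for the Augsburg scene (7 May 2018).
        stack = load_band_stack([
            "mdas_sar.tif",            # SAR backscatter
            "mdas_multispectral.tif",  # multispectral bands
            "mdas_hyperspectral.tif",  # hyperspectral bands
            "mdas_dsm.tif",            # digital surface model
        ])
        print(stack.shape)  # (total_bands, height, width)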

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning technology, aiming to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.
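    To make the "multiple processing layers" idea concrete, here is a minimal sketch (ours, not taken from the reprint) of a compact convolutional classifier for single-channel SAR patches; the 64×64 patch size and the ten target classes are arbitrary assumptions.

        # Illustrative sketch: a compact CNN for classifying single-channel
        # SAR image patches. Patch size and class count are assumptions.
        import torch
        import torch.nn as nn

        class SmallSARNet(nn.Module):
            def __init__(self, num_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # SAR patches are single-channel
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(1))

        logits = SmallSARNet()(torch.randn(8, 1, 64, 64))  # batch of 8 mock patches
        print(logits.shape)  # torch.Size([8, 10])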

    Clinical microbiology with multi-view deep probabilistic models

    Clinical microbiology is one of the critical topics of this century. Identification and discrimination of microorganisms is considered a global public health priority by the main international health organisations, such as the World Health Organisation (WHO) and the European Centre for Disease Prevention and Control (ECDC). Rapid spread, high morbidity and mortality, and the economic burden associated with their treatment and control are the main causes of their impact. Discrimination of microorganisms is crucial for clinical applications: for instance, Clostridium difficile (C. diff) increases the mortality and morbidity of healthcare-related infections. Furthermore, in the past two decades, other bacteria, including Klebsiella pneumoniae (K. pneumoniae), have demonstrated a significant propensity to acquire antibiotic resistance mechanisms, so the use of an ineffective antibiotic may result in increased mortality. Machine Learning (ML) has the potential to be applied in the clinical microbiology field to automatise current methodologies and provide more efficient, guided, personalised treatments. However, microbiological data are challenging to exploit owing to the presence of a heterogeneous mix of data types, such as real-valued high-dimensional data, categorical indicators, multilabel epidemiological data, binary targets, or even time-series representations. This problem, known in the field of ML as multi-view or multi-modal representation learning, has been studied in other application fields such as mental health monitoring or haematology. Multi-view learning combines different modalities or views representing the same data to extract richer insights and improve understanding. Each modality or view corresponds to a distinct encoding mechanism for the data, and this dissertation specifically addresses the issue of heterogeneity across multiple views. In the probabilistic ML field, the exploitation of multi-view learning is also known as Bayesian Factor Analysis (FA). Current solutions face limitations when handling high-dimensional data and non-linear associations. Recent research proposes deep probabilistic methods to learn hierarchical representations of the data, which can capture intricate non-linear relationships between features. However, some Deep Learning (DL) techniques rely on complicated representations, which can hinder the interpretation of the outcomes, and some inference methods used in DL approaches are computationally burdensome, which can hinder their practical application in real-world situations. Therefore, there is a demand for more interpretable, explainable, and computationally efficient techniques for high-dimensional data. By combining multiple views representing the same information, such as genomic, proteomic, and epidemiologic data, multi-modal representation learning could provide a better understanding of the microbial world. Hence, in this dissertation, two deep probabilistic models that address current limitations in the state of the art of clinical microbiology are proposed. Both models are also tested in two real scenarios, antibiotic resistance prediction in K. pneumoniae and automatic ribotyping of C. diff, in collaboration with the Instituto de Investigación Sanitaria Gregorio Marañón (IISGM) and the Instituto Ramón y Cajal de Investigación Sanitaria (IRyCIS).
The first presented algorithm is the Kernelised Sparse Semi-supervised Heterogeneous Interbattery Bayesian Analysis (KSSHIBA). This algorithm uses a kernelised formulation to handle non-linear data relationships while providing compact representations through the automatic selection of relevant vectors. Additionally, it uses Automatic Relevance Determination (ARD) over the kernel to determine the relevance of each input feature; a toy version of such a kernel is sketched below. It is then tailored and applied in the microbiological laboratories of the IISGM and IRyCIS to predict antibiotic resistance in K. pneumoniae, using specific kernels that handle Matrix-Assisted Laser Desorption Ionization (MALDI)-Time-Of-Flight (TOF) mass spectra of bacteria. Moreover, by exploiting multi-modal learning between the spectra and epidemiological information, it outperforms other state-of-the-art algorithms. The presented results demonstrate the importance of heterogeneous models that can analyse epidemiological information and automatically adjust to different data distributions. Implementing this method in microbiological laboratories could significantly reduce the 24-72 hours currently required to obtain resistance results and, moreover, improve patient outcomes. The second algorithm is a hierarchical Variational AutoEncoder (VAE) for heterogeneous data using an explainable FA latent space, called FA-VAE. The FA-VAE model is built on the foundation of the successful KSSHIBA approach for dealing with semi-supervised heterogeneous multi-view problems and further expands the range of data domains it can handle. With the ability to work with a wide range of data types, including multilabel, continuous, binary, categorical, and even image data, the FA-VAE model offers a versatile and powerful solution for real-world data sets, depending on the VAE architecture. This model is adapted and used in the microbiological laboratory of the IISGM, resulting in an innovative technique for automatic ribotyping of C. diff using MALDI-TOF data. To the best of our knowledge, this is the first demonstration of using any kind of ML for C. diff ribotyping. Experiments have been conducted on strains from the Hospital General Universitario Gregorio Marañón (HGUGM) to evaluate the viability of the proposed approach. The results have demonstrated high accuracy rates, with KSSHIBA even achieving perfect accuracy on the first data collection. The models have also been tested in a real-life outbreak scenario at the HGUGM, where FA-VAE successfully classified all outbreak samples. The presented results have not only shown high accuracy in predicting each strain’s ribotype but also revealed an explainable latent space. Furthermore, traditional ribotyping methods, which rely on PCR, require 7 days, whereas FA-VAE predicted the same results on the same day. This improvement significantly reduces the response time, supporting the decision to isolate patients with hyper-virulent ribotypes of C. diff on the same day of infection. The promising results, obtained in a real outbreak, provide a solid foundation for further advancements in the field. This study is a crucial stepping stone towards realising the full potential of MALDI-TOF for bacterial ribotyping and advancing our ability to tackle bacterial outbreaks.
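The following toy sketch (ours, not the thesis code) shows what ARD over an RBF kernel looks like: each input feature gets its own lengthscale, and a very large lengthscale effectively switches the feature off, which is how feature relevance can be read out. In KSSHIBA the lengthscales are learned; here they are fixed by hand for demonstration.

        # ARD-RBF kernel: K[i, j] = exp(-0.5 * sum_d ((x_id - y_jd) / l_d)^2).
        # In the real model the lengthscales l_d are learned; here fixed.
        import numpy as np

        def ard_rbf_kernel(X, Y, lengthscales):
            """RBF kernel with one lengthscale per input feature (ARD)."""
            Xs = X / lengthscales
            Ys = Y / lengthscales
            sq = (Xs**2).sum(1)[:, None] + (Ys**2).sum(1)[None, :] - 2 * Xs @ Ys.T
            return np.exp(-0.5 * np.maximum(sq, 0.0))  # clamp tiny negatives

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 3))
        # A huge lengthscale suppresses a feature: feature 2 becomes irrelevant.
        K = ard_rbf_kernel(X, X, lengthscales=np.array([1.0, 0.5, 1e6]))
        print(K.shape)  # (5, 5)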
In conclusion, this doctoral thesis has significantly contributed to the field of Bayesian FA by addressing its drawbacks in handling various data types through the creation of novel models, namely KSSHIBA and FA-VAE. Additionally, a comprehensive analysis of the limitations of automating laboratory procedures in the microbiology field has been carried out. The effectiveness of the newly developed models has been demonstrated through their successful implementation in critical problems, such as predicting antibiotic resistance and automating ribotyping. As a result, KSSHIBA and FA-VAE, in terms of both their technical and practical contributions, signify noteworthy progress in both the clinical and the Bayesian statistics fields. This dissertation opens up possibilities for future advancements in automating microbiological laboratories.
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: President: Juan José Murillo Fuentes; Secretary: Jerónimo Arenas García; Member: María de las Mercedes Marín Arriaz

    Recommending on graphs: a comprehensive review from a data perspective

    Recent advances in graph-based learning approaches have demonstrated their effectiveness in modelling users' preferences and items' characteristics for Recommender Systems (RSs). Most of the data in RSs can be organized into graphs where various objects (e.g., users, items, and attributes) are explicitly or implicitly connected and influence each other via various relations. Such a graph-based organization makes it possible to exploit graph learning techniques (e.g., random walks and network embeddings) to enrich the representations of the user and item nodes, which is an essential factor for successful recommendations. In this paper, we provide a comprehensive survey of Graph Learning-based Recommender Systems (GLRSs). Specifically, we start from a data-driven perspective to systematically categorize the various graphs in GLRSs and analyze their characteristics. We then discuss the state-of-the-art frameworks, focusing on the graph learning module and on how they address practical recommendation challenges such as scalability, fairness, diversity, and explainability. Finally, we share some potential research directions in this rapidly growing area. Comment: Accepted by UMUAI.
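    As a minimal, hedged illustration of one building block named above, the sketch below generates uniform random walks over a toy user-item graph; DeepWalk-style embedding methods feed such walks into a skip-gram model to learn node representations. The graph and all parameters are invented for the example.

        # Uniform random walks over an adjacency-list graph, the raw material
        # for random-walk-based node embeddings. Toy graph, toy parameters.
        import random

        def random_walks(adj, walk_length=5, walks_per_node=2, seed=42):
            """Return uniform random walks starting from every node."""
            rng = random.Random(seed)
            walks = []
            for start in adj:
                for _ in range(walks_per_node):
                    walk = [start]
                    while len(walk) < walk_length:
                        neighbors = adj[walk[-1]]
                        if not neighbors:
                            break
                        walk.append(rng.choice(neighbors))
                    walks.append(walk)
            return walks

        # Tiny bipartite user-item graph: users u1, u2 and items i1..i3.
        adj = {"u1": ["i1", "i2"], "u2": ["i2", "i3"],
               "i1": ["u1"], "i2": ["u1", "u2"], "i3": ["u2"]}
        for walk in random_walks(adj)[:3]:
            print(" -> ".join(walk))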

    Applications of machine learning to gravitational waves

    Gravitational waves, predicted by Albert Einstein in 1916 and first directly observed in 2015, are a powerful window into the universe and its past. Currently, multiple detectors around the globe are in operation. While the technology has matured to a point where detections are common, there are still unsolved problems: traditional search algorithms are only optimal under assumptions which do not hold in contemporary detectors, and high data rates and latency requirements can be challenging. In this thesis, we use new methods based on recent advancements in machine learning to tackle these issues. We develop search algorithms competitive with conventional methods in a realistic setting. In doing so, we cover a mock data challenge which we organized and which served as a framework for obtaining some of these results. Finally, we demonstrate the power of our search algorithms by applying them to data from the second half of LIGO's third observing run. We find that the events targeted by our searches are identified reliably.
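    For a concrete, if schematic, picture of such a machine-learning search, the sketch below defines a small 1-D convolutional classifier that labels short whitened-strain segments as noise or signal. The architecture, segment length and sampling rate are our assumptions, not the thesis's.

        # Schematic 1-D CNN of the general kind used in ML-based searches:
        # it maps a short strain segment to [noise, signal] logits.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
            nn.LazyLinear(2),  # logits: [noise, signal]
        )

        segment = torch.randn(1, 1, 2048)  # one mock 1-second segment at 2048 Hz
        print(model(segment).shape)        # torch.Size([1, 2])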

    Collaborative Techniques for Indoor Positioning Systems

    The demand for Indoor Positioning Systems (IPSs) developed specifically for mobile and wearable devices is continuously growing as a consequence of the expansion of the global market for Location-Based Services (LBS), the increasing adoption of mobile LBS applications, and the ubiquity of mobile/wearable devices in our daily life. Nevertheless, the design of mobile/wearable device-based IPSs must fulfill additional requirements, namely low power consumption, reuse of the devices’ built-in technologies, and inexpensive, straightforward implementation. Among the indoor positioning technologies embedded in mobile/wearable devices, IEEE 802.11 Wireless LAN (Wi-Fi) and Bluetooth Low Energy (BLE), in combination with lateration and fingerprinting, have received extensive attention from research communities as means to meet these requirements. Although these technologies are straightforward to implement in positioning approaches based on the Received Signal Strength Indicator (RSSI), positioning accuracy decreases mainly due to signal propagation fluctuations in Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions and the heterogeneity of device hardware. Therefore, providing a solution that achieves the target accuracy within the given constraints remains an open issue. The motivation behind this doctoral thesis is to address the limitations of traditional RSSI-based IPSs for human positioning, which suffer from low accuracy due to signal fluctuations and hardware heterogeneity as well as deployment cost constraints, by exploiting the ubiquity of mobile devices together with collaborative and machine learning-based techniques. The research undertaken in this doctoral thesis therefore focuses on developing and evaluating mobile device-based collaborative indoor techniques, using Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), for human positioning, to enhance the position accuracy of traditional RSSI-based indoor positioning systems (i.e., lateration and fingerprinting) in real-world conditions. The methodology followed during the research consists of four phases. In the first phase, a comprehensive systematic review of Collaborative Indoor Positioning Systems (CIPSs) was conducted to identify the key design aspects and evaluations used in/for CIPSs and the main concerns, limitations, and gaps reported in the literature. In the second phase, extensive experimental data collections using mobile devices and considering collaborative scenarios were performed. The collected data were used to create a mobile device-based BLE database for testing ranging-based collaborative indoor positioning approaches, as well as BLE and Wi-Fi radio maps for estimating devices’ positions in the non-collaborative phase. Moreover, a detailed description of the methodology used for collecting and processing the data and creating the database, as well as its structure, was provided to guarantee the reproducibility, use, and expansion of the database. In the third phase, the traditional methods for estimating distance (based on the Logarithmic Distance Path Loss (LDPL) model and fuzzy logic) and position (RSSI-lateration and fingerprinting with 9-Nearest Neighbors (9-NN)) were described and evaluated in order to present their limitations and challenges; a worked sketch of the LDPL and lateration steps appears after this abstract. Also, two novel approaches to improve distance and positioning accuracy were proposed.
In the last phase, the two proposed variants of a collaborative indoor positioning system using MLP ANNs were developed to enhance the accuracy of the traditional indoor positioning approaches (BLE–RSSI lateration-based and fingerprinting) and evaluated under real-world conditions to demonstrate their feasibility and benefits and to present their limitations and future research avenues. The findings obtained in each of the aforementioned research phases correspond to the main contributions of this doctoral thesis. Specifically, the results of evaluating our CIPSs demonstrate that the first proposed variant of a mobile device-based CIPS outperforms the positioning accuracy of traditional lateration-based IPSs. Considering the distances among collaborating devices, our CIPS significantly outperforms the lateration baseline at short distances (≤ 4 m), medium distances (> 4 m and ≤ 8 m), and large distances (> 8 m), with maximum error reductions of 49.15 %, 19.24 %, and 21.48 % for the “median” metric, respectively. Regarding the second variant, the results demonstrate that for short distances between collaborating devices, our collaborative approach outperforms the traditional IPSs based on BLE–fingerprinting and Wi-Fi–fingerprinting, with maximum error reductions of 23.41 % and 19.49 % for the “75th percentile” and “90th percentile” metrics, respectively. For medium distances, our proposed approach outperforms the traditional IPSs based on BLE–fingerprinting in the first 60 % and after the 90 % of cases in the Empirical Cumulative Distribution Function (ECDF), and only partially (20 % of cases in the ECDF) the traditional IPSs based on Wi-Fi–fingerprinting. For larger distances, the performance of our proposed approach is worse than that of the traditional fingerprinting-based IPSs. Overall, the results demonstrate the usefulness and usability of our CIPSs for improving the positioning accuracy of traditional IPSs, namely those based on BLE–lateration, BLE–fingerprinting, and Wi-Fi–fingerprinting, under specific conditions, mainly conditions where the collaborating devices are at short or medium distances from each other. Moreover, the integration of the MLP ANN model in CIPSs allows our approach to be used under different scenarios and technologies, showing its generalizability, usefulness, and feasibility. (Cotutelle joint doctoral dissertation.)
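As a worked illustration of the two traditional steps just described, the sketch below first inverts the LDPL model, RSSI(d) = RSSI(d0) - 10·n·log10(d/d0), to estimate ranges from RSSI, and then solves a linearized least-squares lateration from three anchors. All numbers (reference power, path-loss exponent, anchor layout) are illustrative assumptions, not values from the thesis.

        # (1) Ranging with the LDPL model, (2) least-squares lateration.
        # Reference power, path-loss exponent and anchors are illustrative.
        import numpy as np

        def ldpl_distance(rssi, rssi_d0=-45.0, d0=1.0, n=2.0):
            """Invert RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) to get metres."""
            return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

        def laterate(anchors, distances):
            """Linearized least-squares position from anchors and ranges."""
            x0, y0 = anchors[0]
            A, b = [], []
            for (xi, yi), di in zip(anchors[1:], distances[1:]):
                A.append([2 * (xi - x0), 2 * (yi - y0)])
                b.append(distances[0]**2 - di**2
                         + xi**2 - x0**2 + yi**2 - y0**2)
            pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return pos

        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
        rssi = [-55.0, -65.0, -60.0]              # mock BLE readings per anchor
        ranges = [ldpl_distance(r) for r in rssi]
        print(laterate(anchors, ranges))          # estimated (x, y)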

    Computational Methods for Protein Inference in Shotgun Proteomics Experiments

    In recent decades, the use of high-throughput methods in the most diverse areas of the natural sciences has risen significantly, amounting to a veritable paradigm shift. A large number of new technologies have been developed to advance and accelerate the quantification of molecules involved in a wide variety of biological processes, accompanied by a considerable increase in the volume of data generated by these improved methods. By providing computational methods for analysing this mass of raw data, the research field of bioinformatics plays an ever-greater role in extracting biological insights. Computational mass spectrometry, in particular, supports the processing, analysis and visualisation of data from high-throughput mass spectrometry experiments. When studying the entirety of the proteins of a cell or another sample of biological material, even the latest methods reach their limits. Many laboratories therefore digest the sample before it enters the mass spectrometer in order to reduce the complexity of the molecules to be measured. These so-called bottom-up proteomics experiments, however, make the subsequent computational analysis harder: because proteins are digested into peptides, complex ambiguities must be taken into account and/or resolved during protein inference, protein grouping and protein quantification. In this dissertation, we present several developments intended to enable an efficient and fully automated analysis of complex and large-scale bottom-up proteomics experiments. To reduce the prohibitive complexity of discrete Bayesian protein inference methods, so-called convolution trees have recently been employed; so far, however, they have offered no accurate and at the same time numerically stable way to perform max-product inference. This dissertation therefore first describes a new method that achieves this by means of a piecewise, extrapolating scheme. Building on the integration of this method into a co-developed library for Bayesian inference, an OpenMS tool for protein inference is then presented. This tool enables efficient protein inference on a discrete Bayesian network using a loopy belief propagation algorithm. Despite the strictly probabilistic formulation of the problem, our method outperforms most established approaches in computational efficiency. The algorithm's interface also offers unique input and output options, such as regularising the number of proteins in a group, protein-specific priors, or recalibrated peptide posteriors. Finally, this thesis presents a complete, easy-to-use, yet scalable workflow for protein inference and quantification built around the new tool. The pipeline is implemented in nextflow and is part of a set of standardised, regularly tested and community-maintained workflows bundled under the nf-core project. Our workflow can process even large data sets with complicated experimental designs and, with a single command, enables a (re-)analysis of local or publicly available data sets with competitive accuracy and excellent performance on a wide variety of high-performance computing environments or in the cloud.
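    The core trick the abstract alludes to, inference on sums of discrete random variables via convolution trees, can be sketched in a few lines (ours, not the thesis code): the distribution of a sum of independent discrete variables is the convolution of their probability mass functions, and pairing intermediate results in a balanced tree keeps the computation cheap. The toy peptide-indicator PMFs below are invented.

        # Distribution of a sum of independent discrete random variables,
        # computed by pairwise convolution of PMFs (the convolution-tree idea).
        import numpy as np

        def pmf_of_sum(pmfs):
            """Convolve PMFs pairwise, layer by layer, like a balanced tree."""
            layer = list(pmfs)
            while len(layer) > 1:
                nxt = [np.convolve(layer[i], layer[i + 1])
                       for i in range(0, len(layer) - 1, 2)]
                if len(layer) % 2:          # odd element carries over
                    nxt.append(layer[-1])
                layer = nxt
            return layer[0]

        # Three peptide indicators, each present (1) with its own probability.
        pmfs = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
        dist = pmf_of_sum(pmfs)   # P(number of present peptides = k), k = 0..3
        print(dist, dist.sum())   # probabilities sum to 1.0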

    Remote Sensing Object Detection Meets Deep Learning: A Meta-review of Challenges and Advances

    Remote sensing object detection (RSOD), one of the most fundamental and challenging tasks in the remote sensing field, has received longstanding attention. In recent years, deep learning techniques have demonstrated robust feature representation capabilities and led to a big leap in the development of RSOD techniques. In this era of rapid technical evolution, this work presents a comprehensive review of the recent achievements in deep learning-based RSOD methods, covering more than 300 papers. We identify five main challenges in RSOD, including multi-scale object detection, rotated object detection, weak object detection, tiny object detection, and object detection with limited supervision, and systematically review the corresponding methods developed in a hierarchical division manner. We also review the widely used benchmark datasets and evaluation metrics within the field of RSOD, as well as the application scenarios for RSOD. Future research directions are provided to further promote research in RSOD. Comment: Accepted by IEEE Geoscience and Remote Sensing Magazine. More than 300 papers relevant to the RSOD field were reviewed in this survey.

    5th International Conference on Advanced Research Methods and Analytics (CARMA 2023)

    Research methods in economics and the social sciences are evolving with the increasing availability of Internet and Big Data sources of information. As these sources, methods, and applications become more interdisciplinary, the 5th International Conference on Advanced Research Methods and Analytics (CARMA) is a forum for researchers and practitioners to exchange ideas and advances on how emerging research methods and sources are applied to different fields of the social sciences, as well as to discuss current and future challenges. Martínez Torres, M. D. R.; Toral Marín, S. (2023). 5th International Conference on Advanced Research Methods and Analytics (CARMA 2023). Editorial Universitat Politècnica de València. https://doi.org/10.4995/CARMA2023.2023.1700