
    Analysis and Modelling of TTL ice crystals based on in-situ light scattering patterns

    Although there are numerous studies of cirrus clouds and their influence on climate, detailed information on their microphysical properties, such as ice crystal geometry, is still lacking. Instrumental limitations and the scarcity of observational data are likely reasons, and this knowledge gap increases the error in climate model predictions. This study therefore focuses on the Tropical Tropopause Layer (TTL), where cirrus clouds occur and the temperature bias is larger. Because the shape and surface geometry of ice crystals strongly influence their radiative effect, and hence temperature, a detailed understanding of these crystals is necessary, and this thesis examines in depth the morphology of different types of ice crystals in the TTL. The primary objective of this research is to analyse the scattering patterns of ice crystals in TTL cirrus and to derive characteristics such as shape and size distributions. As a high cloud, cirrus plays a crucial role in the Earth-atmosphere radiation balance, and knowing the scattering properties of its ice crystals allows their impact on that balance to be estimated. The research further broadens the understanding of the general scattering properties of TTL ice crystals, supporting climate modelling and contributing towards more accurate climate prediction. An investigation into the light scattering data is presented. The data consist of 2D scattering patterns of ice crystals of size 1-100 μm captured by the Aerosol Ice Interface Transition Spectrometer (AIITS) between scattering angles of 6° and 25° at a wavelength of 532 nm. The images were taken during the NERC and NASA Co-ordinated Airborne Studies in the Tropics and Airborne Tropical Tropopause Experiment (the CAST-ATTREX campaign) on 5 March 2015, at altitudes between 15 and 16 km over the Eastern Pacific. Features in the scattering patterns are analysed to identify the crystal habit, as they vary with the geometry of the crystal. After the analysis phase, model crystals of specific types and sizes are generated using an appropriate computer program. The scattering data of the model crystals are then simulated using a Beam Tracing Model (BTM) based on physical optics, since geometric optics does not produce the required information and exact methods (such as the T-matrix method or the Discrete Dipole Approximation) are either unsuitable for large size parameters or too time-consuming. The simulated scattering pattern of a model crystal is then compared against the corresponding AIITS pattern to infer characteristics such as the shape, surface texture and size of the ice crystals. Through successive testing and further analysis, the crystal sizes are estimated. Since manual analysis of scattering patterns is time-consuming, a pilot study using a deep learning network has also been undertaken to classify the scattering patterns. Previous studies have shown high concentrations of small ice crystals in TTL cirrus; however, these crystals, especially those <30 μm, are often misclassified because of the limited resolution of imaging instruments, or even dismissed as shattered ice. In this research it was possible to explore both the crystal habit and its surface texture with greater accuracy, because the scattering patterns captured by the AIITS are analysed instead of crystal images. It was found that most of the crystals are quasi-spheroidal in shape, that there is indeed an abundance of smaller crystals <30 μm, and that over a quarter of the crystal population has rough surfaces.
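As a rough illustration of the pattern-matching step described above, the sketch below scores a simulated scattering pattern against a measured one with a simple normalised correlation over an annular region standing in for the 6°-25° angular range. The array size, pixel radii and random placeholder images are assumptions for illustration, not the thesis code or the AIITS data format.

```python
# Minimal sketch, assuming 512x512 intensity images and an annular mask that
# stands in for the 6-25 degree scattering-angle range recorded by AIITS.
import numpy as np

def normalised_correlation(measured: np.ndarray, simulated: np.ndarray,
                           mask: np.ndarray) -> float:
    """Pearson correlation between two 2-D intensity patterns inside a mask."""
    m = measured[mask].astype(float)
    s = simulated[mask].astype(float)
    m = (m - m.mean()) / m.std()
    s = (s - s.mean()) / s.std()
    return float(np.mean(m * s))

ny, nx = 512, 512
yy, xx = np.mgrid[:ny, :nx]
r = np.hypot(yy - ny / 2, xx - nx / 2)
annulus = (r > 60) & (r < 250)       # assumed pixel radii for 6-25 degrees

measured = np.random.rand(ny, nx)    # placeholder for an AIITS pattern
simulated = np.random.rand(ny, nx)   # placeholder for a BTM-simulated pattern

score = normalised_correlation(measured, simulated, annulus)
print(f"pattern similarity: {score:.3f}")
```

A model crystal whose simulated pattern gives a higher score for a given measured pattern would, under this simple metric, be the better candidate for the crystal's habit and size.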

    Functional characterization of the Ustilago maydis effector genes UMAG_11060 and UMAG_05306

    Ustilago maydis causes corn smut and triggers tumor formation in all aerial parts of maize. To adapt to the host plant and promote disease progression, U. maydis uses effector proteins that exhibit organ-specific expression and adaptation during infection. This study focuses on two of these effectors, UMAG_11060 and UMAG_05306. The first part of the thesis characterizes UMAG_11060 (Chapter 2), which encodes the effector protein TOPLESS (TPL) interacting protein 6 (Tip6). The study shows that Tip6 interacts with the N-terminal region of ZmTPL2 through its two EAR (ethylene-responsive element binding factor-associated amphiphilic repression) motifs. These motifs are crucial for virulence function and alter the nuclear distribution pattern of ZmTPL2, disrupting host transcriptional regulation. This disruption leads to the down-regulation of 13 transcription factors in the AP2/ERF B1 subfamily. The study proposes a regulatory mechanism in which Tip6 uses repressive domains to recruit the corepressor ZmTPL2, thereby disrupting the transcriptional networks of the host plant. The second part of the thesis focuses on the characterization of UMAG_05306 (Chapter 3), which exhibits highly specific subcellular localization and appears as thick, twisted filament-like structures. The study shows that UMAG_05306 interacts with four maize dynamin-related proteins (DRPs) and is able to interact with both the N-terminal and C-terminal regions of ZmDRP5. Three of these DRPs are found to interact with maize tubulin, and UMAG_05306 itself directly interacts with tubulin. These findings shed light on the potential roles of these interactions in U. maydis infection. In conclusion, this study provides insight into the molecular mechanisms underlying U. maydis infection and reveals the importance of the UMAG_11060 and UMAG_05306 effectors for virulence and tumor formation.

    (b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

    (b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

    CLIM4OMICS: a geospatially comprehensive climate and multi-OMICS database for maize phenotype predictability in the United States and Canada

    The performance of numerical, statistical, and data-driven diagnostic and predictive crop production modeling relies heavily on data quality for input and calibration or validation processes. This study presents a comprehensive database and the analytics used to consolidate it as a homogeneous, consistent, multidimensional genotypic, phenotypic, and environmental database for maize phenotype modeling, diagnostics, and prediction. The data used are obtained from the Genomes to Fields (G2F) initiative, which provides multiyear genomic (G), environmental (E), and phenotypic (P) datasets that can be used to train and test crop growth models to understand the genotype by environment (GxE) interaction phenomenon. A particular advantage of the G2F database is its diverse set of maize genotype DNA sequences (G2F-G), phenotypic measurements (G2F-P), station-based environmental time series (mainly climatic data) observations collected during the maize-growing season (G2F-E), and metadata for each field trial (G2F-M) across the United States (US), the province of Ontario in Canada, and the state of Lower Saxony in Germany. The construction of this comprehensive climate and genomic database incorporates the analytics for data quality control (QC) and consistency control (CC) to consolidate the digital representation of geospatially distributed environmental and genomic data required for phenotype predictive analytics and modeling of the GxE interaction. The two-phase QC–CC preprocessing algorithm also includes a module to estimate environmental uncertainties. Generally, this data pipeline collects raw files, checks their formats, corrects data structures, and identifies and cures or imputes missing data. This pipeline uses machine-learning techniques to fill the environmental time series gaps, quantifies the uncertainty introduced by using other data sources for gap imputation in G2F-E, discards the missing values in G2F-P, and removes rare variants in G2F-G. Finally, an integrated and enhanced multidimensional database was generated. The analytics for improving the G2F database and the improved database called Climate for OMICS (CLIM4OMICS) follow findability, accessibility, interoperability, and reusability (FAIR) principles, and all data and codes are available at https://doi.org/10.5281/zenodo.8002909 (Aslam et al., 2023a) and https://doi.org/10.5281/zenodo.8161662 (Aslam et al., 2023b), respectively.
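As a hedged illustration of machine-learning gap filling of the kind the pipeline applies to G2F-E station time series, the sketch below imputes missing daily weather values with scikit-learn's iterative imputer. The column names, values and choice of estimator are assumptions for illustration, not the published CLIM4OMICS code.

```python
# Minimal sketch, assuming a small daily weather table with gaps (NaN).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Hypothetical records for one environmental station.
df = pd.DataFrame({
    "tmax_c":  [31.2, 30.8, np.nan, 29.9, 33.1, np.nan, 32.4],
    "tmin_c":  [18.1, 17.9, 18.4, np.nan, 19.2, 18.8, 19.0],
    "rain_mm": [0.0, 2.3, 0.0, 5.1, np.nan, 0.0, 1.2],
    "srad_mj": [22.5, 20.1, 21.7, 19.8, 23.0, np.nan, 22.2],
})

# Iterative imputation models each variable from the others, one way a
# pipeline can exploit cross-variable structure to fill gaps.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0,
)
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(filled.round(2))
```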

    Recognition of trends in a research field from scientific publications and their classification to the Sustainable Development Goals using natural language processing techniques

    Research centres and universities, as generators of knowledge, face a pressing need to subject their scientific output to rigorous analysis in order to detect and evaluate its influence. It is also important for these institutions to identify the correspondence between their scientific output and national and international goals or policies, since this is a crucial factor in recognising their contribution and relevance. Additionally, as part of the scientific activities that support strategic planning and decision-making for academic staff, policymakers and funders, these institutions could rely on the large-scale analysis of academic products, such as scientific articles and theses, to detect research trends. The discipline of data science focuses on managing massive data and turning it into knowledge through Artificial Intelligence techniques. Within this framework, Natural Language Processing techniques such as text classification and topic modelling are used for language analysis and learning. In the academic sphere, the automated analysis of scientific output using data science methodologies can help recognise alignment with science policies and generate innovation strategies. For scientific articles, text classification makes it possible to identify their alignment with policies, such as those related to sustainable development, while topic modelling identifies trends in scientific topics, fostering innovation processes. The literature review carried out in this thesis shows that text classification and topic modelling tasks can be implemented with different Machine Learning architectures and techniques. The state of the art proposes the use of Large Language Models (LLMs) to reach very high levels of performance; however, this requires more specialised knowledge and large computing resources. Classical classification and topic modelling models could be an alternative, yet there are discrepancies in the results reported for datasets of scientific products. Although some specific methodological developments exist for text classification, there are no consistent studies that explicitly consider performance on datasets of scientific articles with imbalanced Sustainable Development Goal labels. For topic modelling, it is necessary to determine whether classical models, compared with LLMs, still perform reasonably well on scientific articles when only the title and abstract are used as the main text for building the datasets. In this context, two frameworks are proposed: one to compare multi-label text classification models whose algorithms and techniques require limited computing infrastructure, and a second to compare models that discover scientific topics (their trends and emerging themes). Both frameworks include criteria under which the scientific-article datasets are processed in ways that directly affect model performance.
The results in multi-label text classification reveal a relationship between data quality (via preprocessing), the base classification algorithm and the multi-label transformation method, which together affect model performance. The comparison of topic models shows that the best result is obtained with the LLM-based model, which is able to exploit the contextual and semantic information of the input text by using a pre-trained BERT model.
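As an illustrative sketch of the multi-label classification task discussed above, and not the framework developed in the thesis, the example below assigns hypothetical Sustainable Development Goal labels to short abstract-like texts using a binary-relevance transformation over TF-IDF features and logistic regression. The documents, labels and model choices are assumptions.

```python
# Minimal sketch: multi-label SDG classification with a classic base classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "Solar microgrids for rural electrification and affordable clean energy",
    "Gender gaps in primary school enrolment and quality education policy",
    "Wastewater treatment and safe drinking water in informal settlements",
    "Low-cost photovoltaic storage improves energy access in remote schools",
]
labels = [["SDG7"], ["SDG4", "SDG5"], ["SDG6"], ["SDG4", "SDG7"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per SDG

# Binary relevance: one logistic-regression classifier per SDG label,
# trained on TF-IDF features of the title/abstract text.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(docs, Y)

pred = model.predict(["Solar energy access for rural schools"])
print(mlb.inverse_transform(pred))
```

The same pipeline structure lets the base classifier and the multi-label transformation be swapped independently, which is the kind of comparison the first framework is described as supporting.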

    New Computational Methods for Automated Large-Scale Archaeological Site Detection

    This doctoral thesis presents a series of innovative approaches, workflows and models in the field of computational archaeology for the automated large-scale detection of archaeological sites.
New concepts, approaches and strategies are introduced, such as multitemporal lidar, hybrid machine learning, refinement, curriculum learning and blob analysis, as well as different data augmentation methods applied for the first time in the field of archaeology. Multiple sources are used, such as lidar, multispectral satellite imagery, RGB photographs from UAV platforms, historical maps, and several combinations of sensors, data, and sources. The methods created during this PhD have been evaluated in ongoing projects: Urbanization in Iberia and Mediterranean Gaul in the First Millennium BC, detection of burial mounds using machine learning algorithms in the Northwest of the Iberian Peninsula, Drone-based Intelligent Archaeological Survey (DIASur), and Mapping Archaeological Heritage in South Asia (MAHSA), for which workflows adapted to each project's specific challenges have been designed. These new methods provide solutions to problems common to similar large-scale site-detection studies, such as low detection precision and scarce training data. The validated approaches for site detection presented as part of the PhD have been published as open-access papers with freely available code so that they can be implemented in other archaeological studies.
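As a hedged sketch of the blob-analysis idea mentioned above, and not the published code, the example below labels connected regions in a hypothetical binary site-prediction mask and keeps only blobs whose pixel footprint falls within an assumed plausible size range for an archaeological feature.

```python
# Minimal sketch: filtering model detections by blob size with scikit-image.
import numpy as np
from skimage.measure import label, regionprops

# Hypothetical binary mask produced by a segmentation model (1 = predicted site).
mask = np.zeros((200, 200), dtype=np.uint8)
mask[40:60, 50:72] = 1      # a plausible mound-sized blob
mask[120:123, 130:132] = 1  # a tiny blob, likely noise

min_area_px, max_area_px = 100, 5000  # assumed plausible footprint in pixels

detections = []
for region in regionprops(label(mask)):
    if min_area_px <= region.area <= max_area_px:
        detections.append(region.centroid)

print(f"{len(detections)} candidate site(s) kept:", detections)
```

Filtering by blob size is one simple way to suppress noise-sized false positives in large-scale detection output before any manual validation.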

    Bayazid Abad (Bayazi Awa): Transition of Material Patterns from the Middle Bronze to the Iron Age in North-Western Iran

    This study focuses on the tomb of Bayazid Abad, located in North-Western Iran near Hasanlu. The tomb contains artifacts from the Middle Bronze Age, Late Bronze Age, and Iron Age I and II, providing valuable insights into the material culture of the region during those periods. The findings from Bayazid Abad are analyzed alongside those from Hasanlu and Dinkha to understand the broader cultural context. The dating and recognition of the Bronze Age and Iron Age in North-Western Iran were initially based on excavations at Hasanlu. However, the understanding of the site's stratigraphy and dating has evolved over time, with Michael Danti's comprehensive study being particularly significant. Given the strong connection between the two sites, this dissertation follows Danti's chronology in examining the architecture, pottery, seals, beads, weapons, and other artifacts from Bayazid Abad. By studying the burial goods from Bayazid Abad, this research aims to expand our knowledge of the material culture in North-Western Iran from the second to the first millennium BC. The primary objectives are to determine the cultural period(s) represented by Bayazid Abad, to explore the connections between the tomb and neighboring sites such as Hasanlu and Dinkha, and to enhance the existing database of North-Western Iranian material culture. While previous studies focused mainly on pottery, Bayazid Abad offers the opportunity to investigate aspects of material culture beyond ceramics. The tomb's significant collection of artifacts can provide valuable insights into the cultural practices and traditions of the region.

    ACARORUM CATALOGUS IX. Acariformes, Acaridida, Schizoglyphoidea (Schizoglyphidae), Histiostomatoidea (Histiostomatidae, Guanolichidae), Canestrinioidea (Canestriniidae, Chetochelacaridae, Lophonotacaridae, Heterocoptidae), Hemisarcoptoidea (Chaetodactylidae, Hyadesiidae, Algophagidae, Hemisarcoptidae, Carpoglyphidae, Winterschmidtiidae)

    The 9th volume of the series Acarorum Catalogus contains lists of mites of 13 families, 225 genera and 1268 species of the superfamilies Schizoglyphoidea, Histiostomatoidea, Canestrinioidea and Hemisarcoptoidea. Most of these mites live on insects or other animals (as parasites, phoretics or commensals); some inhabit rotten plant material, dung or fungi. Mites of the families Chetochelacaridae and Lophonotacaridae are specialised to live with myriapods (Diplopoda). The peculiar aquatic or intertidal mites of the families Hyadesiidae and Algophagidae are also included.

    2023- The Twenty-seventh Annual Symposium of Student Scholars

    The full program book from the Twenty-seventh Annual Symposium of Student Scholars, held on April 18-21, 2023. Includes abstracts from the presentations and posters.