
    EXPERIENCING HERITAGE DYNAMIC THROUGH VISUALIZATION

    Abstract. This article considers the added value of new technologies in a project devoted to the study of heritage. Multimedia devices can be used to create representations that develop and disseminate information, integrate the architectural and territorial framework, and support a general understanding. The data come from an interdisciplinary research project on medieval buildings in the Vayots Dzor region of Armenia, aimed at studying and understanding its cultural heritage. Three technologies are used to visualize and disseminate the results of the analyses: video, holograms, and virtual reality. These digital visualization methods enable experts to make the investigated topics accessible and comprehensible to a wider public, with a didactic and informative aim. The solid 3D model of the site virtually reproduces reality and provides a spatial perception of the site: it is a neutral base representing the morphological conformation and settlements, a landscape whose reference points are easily identified with the historical architecture, helping the public to get oriented within the territory. These methods of representation allow moving from a general view to a particular one, or to a different framing appropriate to the topic addressed. They thus bind the scientific research to its visual presentation and enable communication even in contexts where a common spoken or written language cannot be used.

    Diversity and dynamics of seaweed-associated microbial communities inhabiting the Lagoon of Venice

    Seaweeds are a group of essential photosynthetic organisms that harbor a rich diversity of associated microbial communities with substantial functions related to host health and defense. Environmental and anthropogenic stressors may disrupt these microbial communities and their metabolic activity, leading to host physiological alterations that negatively affect the seaweeds' performance and survival. Here, the bacterial communities associated with one of the most common seaweeds, Ulva laetevirens Areschoug, were sampled over a year at three sites of the Lagoon of Venice affected by different environmental and anthropogenic stressors. Bacterial communities were characterized through Illumina sequencing of the V4 hypervariable region of the 16S rRNA gene. The study demonstrated that the seaweed-associated bacterial communities at the sites impacted by environmental stressors were host-specific and differed significantly from those at the less affected site. Furthermore, these communities were significantly distinct from those of the surrounding seawater. The composition of the bacterial communities was significantly correlated with environmental parameters (nutrient concentrations, dissolved oxygen saturation, and pH) across sites. The study also showed that several of the bacteria more abundant on U. laetevirens at stressed sites belonged to taxa related to the host response to the stressors. Overall, environmental parameters and anthropogenic stressors were shown to substantially affect seaweed-associated bacterial communities, which reflect the host response to environmental variations.
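The reported correlation between community composition and environmental parameters is the kind of relationship a Mantel-style permutation test captures: it compares a between-sample community dissimilarity matrix with a matrix of environmental distances. The sketch below is illustrative only, with synthetic data standing in for the study's 16S profiles and measurements:

```python
import numpy as np

def mantel(x, y, permutations=999, seed=0):
    """Permutation correlation between two square, symmetric distance matrices."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    iu = np.triu_indices(n, k=1)  # use only the upper triangle (unique pairs)

    def corr(a, b):
        a, b = a[iu], b[iu]
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    r_obs = corr(x, y)
    hits = 0
    for _ in range(permutations):
        p = rng.permutation(n)            # permute samples, not individual cells
        if corr(x[np.ix_(p, p)], y) >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)

# Toy data: 6 samples, environmental distances (e.g. nutrients, DO saturation, pH)
# and a community dissimilarity matrix that tracks the environment plus noise.
rng = np.random.default_rng(1)
env = rng.random((6, 3))
env_d = np.linalg.norm(env[:, None] - env[None, :], axis=-1)
comm_d = env_d + rng.normal(0, 0.05, env_d.shape)
comm_d = (comm_d + comm_d.T) / 2
np.fill_diagonal(comm_d, 0)

r, p = mantel(comm_d, env_d)
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```

Because the toy community matrix is built to track the environmental one, the test reports a strong, significant correlation; with real data the matrices would come from Bray-Curtis dissimilarities of 16S profiles and standardized environmental measurements.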

    Genomic comparison of Lactobacillus helveticus strains highlights probiotic potential

    Lactobacillus helveticus belongs to the large group of lactic acid bacteria (LAB), which are the major players in the fermentation of a wide range of foods. LAB are also present in the human gut, which has often been exploited as a reservoir of potential novel probiotic strains, but several parameters need to be assessed before establishing their safety and potential use for human consumption. In the present study, six L. helveticus strains isolated from natural whey cultures were analyzed for their phenotype and genotype in exopolysaccharide (EPS) production, low pH and bile salt tolerance, bile salt hydrolase (BSH) activity, and antibiotic resistance profile. In addition, a comparative genomic investigation was performed between the six newly sequenced strains and the 51 publicly available genomes of L. helveticus to define the pangenome structure. The results indicate that the newly sequenced strain UC1267 and the deposited strain DSM 20075 can be considered good candidates for gut-adapted strains due to their ability to survive in the presence of 0.2% glycocholic acid (GCA) and 1% taurocholic and taurodeoxycholic acid (TDCA). Moreover, these strains had the highest bile salt deconjugation activity among the tested L. helveticus strains. Considering the safety profile, none of these strains presented antibiotic resistance phenotypically or at the genome level. The pangenome analysis revealed genes specific to the new isolates, such as enzymes related to folate biosynthesis in strains UC1266 and UC1267 and an integrated phage in strain UC1035. Finally, the presence of maltose-degrading enzymes and multiple copies of 6-phospho-beta-glucosidase genes in our strains indicates the capability to metabolize sugars other than lactose, the sugar related solely to dairy niches.
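A pangenome analysis like the one described partitions genes into a core set (shared by every strain) and an accessory set, and flags strain-specific genes such as the integrated phage in UC1035. A minimal illustration with hypothetical gene sets (names and contents invented, not taken from the study):

```python
# Hypothetical gene presence sets per strain (illustrative only).
strains = {
    "UC1035": {"lacZ", "bsh1", "epsA", "phage_int"},
    "UC1266": {"lacZ", "bsh1", "epsA", "folP"},
    "UC1267": {"lacZ", "bsh1", "epsA", "folP", "malL"},
}

pangenome = set().union(*strains.values())        # every gene seen in any strain
core = set.intersection(*strains.values())        # genes shared by all strains
accessory = pangenome - core                      # genes missing from at least one
unique = {                                        # genes found in exactly one strain
    name: genes - set().union(*(g for s, g in strains.items() if s != name))
    for name, genes in strains.items()
}

print(sorted(core))          # shared backbone
print(sorted(accessory))     # variable gene content
print(unique["UC1035"])      # strain-specific, e.g. an integrated phage
```

Real pangenome pipelines first cluster predicted proteins into orthologous gene families; the set algebra on the resulting presence/absence table is essentially what is shown here.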

    GAM: Genomic Assemblies Merger

    Motivations. In the last three years more than 20 assemblers have been proposed to tackle the hard task of genome assembly. Recent evaluation efforts (Assemblathon 1 and GAGE) demonstrated that none of these tools clearly outperforms the others. However, the results clearly show that some assemblers perform better than others on specific regions and statistics while performing poorly on other regions and evaluation measures. With this picture in mind we developed GAM (Genomic Assemblies Merger), whose primary goal is to merge two or more assemblies in order to obtain a more contiguous one. Moreover, as a by-product of the merging step, GAM is able to correct mis-assemblies. GAM does not need a global alignment between contigs, making it unique among assembly reconciliation tools. A computationally expensive alignment is thus avoided, and paralogous sequences (likely to create false connections among contigs) do not represent a problem. The GAM procedure is based only on information coming from the reads used in the assembly phases, and it can be used even on assemblies obtained from different datasets. Methods. Let us concentrate on the merging of two assemblies, dubbed M and S. As an almost mandatory preprocessing step, the reads (or a subset of them) used in the assembly phase are aligned against M and S using a SAM-compatible aligner (e.g., BWA, rNA). GAM takes as input M, S, and the two SAM files produced in the preprocessing step. The main idea is to identify fragments of M and S with high similarity. For this purpose, GAM identifies regions, named blocks, of M and S that share a sufficiently high number of reads (i.e., regions to which the same reads align). After all blocks are identified, the Assembly Graph (AG) is built: each node corresponds to a block, and a directed edge connects block A to block B if the first precedes the second in either M or S (see Fig. 1). Once the AG is available, the merging phase can start.
As a first step GAM identifies genomic regions in which the assemblies contradict each other (loops, bifurcations, etc.). These areas represent potential inconsistencies between the two sequences. We chose to be as conservative as possible, electing (for example) M to be the master assembly: all its contigs are assumed to be correct and cannot be contradicted. S becomes the slave, and wherever an inconsistency is found, M is preferred to S. After the identification and resolution of problematic regions, GAM visits the simplified graph, merges contigs according to the blocks and edges in the AG (each merging step is performed using a variant of the Smith-Waterman algorithm), and finally outputs the new, improved assembly. GAM is not limited to contigs: it can also work with scaffolds, filling the Ns inserted by one assembler and not by the other. Results. GAM has been tested on several real datasets, in particular Olea's chloroplast (241X Illumina paired reads and 21X 454 paired reads), Populus trichocarpa (82X Illumina paired reads), and Boa constrictor (40X Illumina paired reads). The Illumina reads have an average length of 100 bp and an insert size of 500 bp. All tests were performed on a computer equipped with 8 cores and 32 GB of RAM. ABySS and CLC were selected as assemblers. Results are summarized in Fig. 1. Olea's chloroplast was used as a proof-of-concept experiment: the presence of a reference sequence allowed validation of GAM's output (using dnadiff). Two assemblies were obtained with CLC using the Illumina and 454 data, and GAM was used to merge them. Figure 1 shows that the GAM assembly is not only more contiguous but also more correct: while the master (CLC-Illumina) and slave (CLC-454) have 58 and 39 suspicious regions respectively, GAM has only 14. On Populus trichocarpa and Boa constrictor, the CLC assemblies were used as master due to their better contiguity. In both cases the assemblies returned by GAM were more contiguous (see Fig. 1).
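The block-finding step described in the Methods can be sketched with set operations: regions of M and S become candidate blocks when enough of the same reads align to both. The toy example below uses one contig-level hit per read and an invented support threshold; the real tool works on sub-contig regions and alignment positions from the SAM files:

```python
from collections import defaultdict

# Toy alignments: read id -> contig hit in each assembly (reads absent from a
# dict are unaligned in that assembly). Names are invented for illustration.
master_hits = {"r1": "M1", "r2": "M1", "r3": "M1", "r4": "M2", "r5": "M2"}
slave_hits  = {"r1": "S1", "r2": "S1", "r3": "S2", "r4": "S2", "r5": "S2"}

# A "block" here is a (master contig, slave contig) pair supported by at least
# MIN_SHARED shared reads; pairs with too little support are discarded as noise.
MIN_SHARED = 2
support = defaultdict(set)
for read, m_ctg in master_hits.items():
    s_ctg = slave_hits.get(read)
    if s_ctg is not None:
        support[(m_ctg, s_ctg)].add(read)

blocks = {pair for pair, reads in support.items() if len(reads) >= MIN_SHARED}
print(sorted(blocks))
```

In the toy data, read r3 links M1 and S2 but falls below the threshold, so only the well-supported pairs survive; in GAM proper these blocks become the nodes of the Assembly Graph, with edges given by their order along M and S.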

    University of Kentucky Measurements of Wind, Temperature, Pressure and Humidity in Support of LAPSE-RATE Using Multisite Fixed-Wing and Rotorcraft Unmanned Aerial Systems

    In July 2018, unmanned aerial systems (UASs) were deployed to measure the properties of the lower atmosphere within the San Luis Valley, an elevated valley in Colorado, USA, as part of the Lower Atmospheric Profiling Studies at Elevation – a Remotely-piloted Aircraft Team Experiment (LAPSE-RATE). Measurement objectives included detailing boundary layer transition, canyon cold-air drainage and convection initiation within the valley. Details of the contribution to LAPSE-RATE made by the University of Kentucky are provided here, which include measurements by seven different fixed-wing and rotorcraft UASs totaling over 178 flights with validated data. The data from these coordinated UAS flights consist of thermodynamic and kinematic variables (air temperature, humidity, pressure, wind speed and direction) and include vertical profiles up to 900 m above ground level and horizontal transects up to 1500 m in length. These measurements have been quality controlled and are openly available in the Zenodo LAPSE-RATE community data repository (https://zenodo.org/communities/lapse-rate/, last access: 23 July 2020), with the University of Kentucky data available at https://doi.org/10.5281/zenodo.3701845 (Bailey et al., 2020).
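Turning such flight data into a vertical profile typically means binning samples by altitude and averaging within each bin. A minimal sketch with invented sample values (not taken from the repository):

```python
import numpy as np

# Toy UAS samples: altitude above ground level (m) and air temperature (deg C).
alt  = np.array([ 12.,  48., 105., 160., 210., 390., 610., 880.])
temp = np.array([21.0, 20.6, 20.1, 19.8, 19.4, 18.2, 16.9, 15.3])

# Average temperature in 250 m altitude bins up to 1000 m a.g.l.
edges = np.arange(0, 1001, 250)
idx = np.digitize(alt, edges) - 1          # bin index for each sample
profile = [temp[idx == i].mean() if np.any(idx == i) else np.nan
           for i in range(len(edges) - 1)]
print(profile)
```

The same binning applies to humidity, pressure, or wind components; empty bins are left as NaN so gaps in a flight's coverage stay visible in the profile.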

    SuRankCo: supervised ranking of contigs in de novo assemblies

    Background: Evaluating the quality and reliability of a de novo assembly, and of single contigs in particular, is challenging, since a ground truth is commonly not available and numerous factors may influence the results. Currently available procedures provide assembly scores but lack a comparative quality ranking of contigs within an assembly. Results: We present SuRankCo, which relies on a machine learning approach to predict quality scores for contigs and to enable the ranking of contigs within an assembly. The result is a sorted contig set which allows selective contig usage in downstream analysis. Benchmarking on datasets with known ground truth shows promising sensitivity and specificity and favorable comparison to existing methodology. Conclusions: SuRankCo analyzes the reliability of de novo assemblies at the contig level and thereby allows quality control and ranking prior to further downstream and validation experiments.
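The train-then-rank idea behind SuRankCo can be illustrated with a toy scoring model: learn a mapping from per-contig features to known quality scores on a dataset with ground truth, then sort contigs by predicted score. The features, labels, and linear model below are invented for illustration; SuRankCo itself uses a more powerful machine learning approach:

```python
import numpy as np

# Hypothetical per-contig features: [length_kb, mean_coverage, mate_consistency]
X = np.array([[120., 35., 0.98],
              [ 15., 12., 0.60],
              [ 80., 30., 0.95],
              [  5.,  8., 0.40]])
# Training labels: quality scores from a dataset with known ground truth.
y = np.array([0.95, 0.40, 0.90, 0.20])

# Fit a simple linear scoring model by least squares (illustrative stand-in
# for the actual learner), then rank contigs by predicted score.
A = np.hstack([X, np.ones((len(X), 1))])   # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

scores = A @ w
ranking = np.argsort(-scores)              # best-scoring contig first
print(ranking)
```

Downstream analyses would then consume contigs in ranking order, or drop everything below a chosen score cutoff.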

    Feature-by-Feature – Evaluating De Novo Sequence Assembly

    The whole-genome sequence assembly (WGSA) problem is among the most studied problems in computational biology. Despite the availability of a plethora of tools (i.e., assemblers), all claiming to have solved the WGSA problem, little has been done to systematically compare their accuracy and power. Traditional methods rely on standard metrics and read simulation: on the one hand, metrics like the N50 and the number of contigs focus only on size without proportionately emphasizing information about the correctness of the assembly; on the other hand, comparisons performed on simulated datasets can be highly biased by the unrealistic assumptions of the underlying read generator. Recently the Feature Response Curve (FRC) method was proposed to assess overall assembly quality and correctness: the FRC transparently captures the trade-off between contigs' quality and their sizes. Nevertheless, the relationships among the different features and their relative importance remain unknown. In particular, the FRC cannot account for the correlation among the different features. We analyzed the correlation among the different features in order to better describe their relationships and their importance in gauging assembly quality and correctness. In particular, using multivariate techniques like principal and independent component analysis, we were able to estimate the "excess dimensionality" of the feature space. Moreover, principal component analysis allowed us to show how poorly the acclaimed N50 metric describes assembly quality. Applying independent component analysis, we identified a subset of features that better describe the assemblers' performance. We demonstrated that by focusing on a reduced set of highly informative features we can use the FRC curve to better describe and compare the performance of different assemblers.
Moreover, as a by-product of our analysis, we discovered how often evaluation based on simulated data, obtained with state-of-the-art simulators, leads to unrealistic results.
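The "excess dimensionality" estimate can be illustrated with a PCA on synthetic, deliberately correlated features: when some features are near-copies of others, only a few principal components carry most of the variance. A minimal sketch (synthetic data, not the paper's feature set):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 50 assemblies x 6 FRC-style features, where the
# last three features are noisy copies of the first three (strong correlation).
base = rng.normal(size=(50, 3))
X = np.hstack([base, base + rng.normal(scale=0.1, size=(50, 3))])

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance fraction per component
print(np.round(explained, 3))

# Roughly three components carry almost all the variance: the 6-dimensional
# feature space is effectively ~3-dimensional.
top3 = float(np.sum(explained[:3]))
print(top3)
```

On real FRC features the drop-off is less extreme, but the same spectrum reveals how many of the measured features are genuinely independent.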