
    The kindest cut: Enhancing the user experience of mobile tv through adequate zooming

    The growing market for Mobile TV requires automated adaptation of standard TV footage to small displays. Extreme long shots (XLS) depicting distant objects can spoil the user experience, e.g. in soccer content. Automated zooming schemes can improve the visual experience if the resulting footage meets user expectations in terms of visual detail and quality without omitting valuable context information. Current zooming schemes are ignorant of the beneficial zoom ranges for a given target size when applied to standard-definition TV footage. In two experiments, 84 participants were able to switch between original and zoom-enhanced soccer footage at three sizes, from 320×240 (QVGA) down to 176×144 (QCIF). Eye tracking and subjective ratings showed that zoom factors between 1.14 and 1.33 were preferred at all sizes. Interviews revealed that a zoom factor of 1.6 was too high for QVGA content due to low perceived video quality, but beneficial at QCIF size; the optimal zoom thus depended on the target display size. We include a function to compute the optimal zoom for XLS depending on the target device size. It can be applied in automatic content-adaptation schemes and should stimulate further research on the requirements of different shot types in video coding.
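    The paper's fitted zoom function is not reproduced in the abstract. As a purely illustrative sketch, one could linearly interpolate the optimal zoom between the two data points the abstract does report (1.6 beneficial at QCIF width, 1.33 at the upper end of the preferred range at QVGA width); the function name and the interpolation itself are assumptions, not the authors' model.

```python
def optimal_zoom(display_width: float) -> float:
    """Hypothetical optimal XLS zoom factor, linearly interpolated
    between the two points reported in the abstract: 1.6 at QCIF
    width (176 px) and 1.33 at QVGA width (320 px). The paper's
    actual fitted function is not given in the abstract."""
    qcif_w, qvga_w = 176.0, 320.0
    z_qcif, z_qvga = 1.6, 1.33
    # Clamp outside the tested range rather than extrapolate.
    if display_width <= qcif_w:
        return z_qcif
    if display_width >= qvga_w:
        return z_qvga
    t = (display_width - qcif_w) / (qvga_w - qcif_w)
    return z_qcif + t * (z_qvga - z_qcif)
```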

    Integrating atomistic molecular dynamics simulations, experiments, and network analysis to study protein dynamics: strength in unity

    In recent years, we have seen remarkable progress in the field of protein dynamics. Indeed, we can now study protein dynamics in atomistic detail over several timescales with a rich portfolio of experimental and computational techniques. On the one hand, this gives us the possibility to validate simulation methods and physical models against a broad range of experimental observables. On the other hand, it also allows a complementary and comprehensive view of protein structure and dynamics. What is needed now is a better understanding of the link between the dynamic properties that we observe and the functional properties of these important cellular machines. To make progress in this direction, we need to improve the physical models used to describe proteins and solvent in molecular dynamics, and to strengthen the integration of experiments and simulations so that each can overcome the other's limitations. Moreover, now that we have the means to study protein dynamics in great detail, we need new tools to understand the information embedded in protein ensembles and in their dynamic signatures. With this aim in mind, we should enrich the current tools for the analysis of biomolecular simulations, with attention to effects that can propagate over long distances and are often associated with important biological functions. In this context, approaches inspired by network analysis can make an important contribution to the analysis of molecular dynamics simulations.

    An algorithm for automated phase picking and localization of seismic events

    In recent years, the spread of digital seismometry has made available a large amount of data useful for locating seismic events of various kinds. Analyzing this information, however, requires an enormous amount of work, which has made evident the need to develop methods and procedures to automatically identify P and S arrivals on seismograms. When performed manually, the reading of arrival times (picking) can be very time-consuming and affected by systematic errors due to the subjectivity involved in identifying the different seismic phases. This is particularly problematic when comparing data from different sources, or even the same data analyzed by different operators, since the criteria used to pick the arrival times will certainly differ. These problems clearly do not arise if the picking of arrival times is automated; moreover, using automatic picking methods, a larger number of events can be located more quickly. In this thesis, a Matlab code was developed that, starting from the seismic traces, automatically identifies P and S arrivals and locates each event using the grid-search method. The algorithm aims to be both versatile and fast, in the arrival-time picking stage as well as in the localization stage. After a review of some of the P- and S-wave picking techniques found in the literature, a detailed description of the developed algorithm follows; finally, the results obtained by testing the algorithm on real data are presented.
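    The grid-search location step can be illustrated with a minimal sketch (in Python rather than the thesis's Matlab): scan candidate epicenters on a grid, predict P travel times under an assumed homogeneous velocity model, and pick the point with the smallest RMS residual after eliminating the unknown origin time by demeaning. The function name, the 2-D geometry, and the constant-velocity model are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def grid_search_locate(stations, t_obs, v_p=6.0, grid_step=1.0, extent=50.0):
    """Locate an event by grid search over a 2-D area.

    stations : (N, 2) array of station x, y coordinates in km
    t_obs    : (N,) observed P arrival times in s
    v_p      : assumed homogeneous P velocity in km/s
    Returns ((x, y), rms) for the best-fitting grid node."""
    xs = np.arange(-extent, extent + grid_step, grid_step)
    best_loc, best_rms = None, np.inf
    for x in xs:
        for y in xs:
            dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            resid = t_obs - dist / v_p
            resid = resid - resid.mean()  # absorb the unknown origin time
            rms = np.sqrt(np.mean(resid ** 2))
            if rms < best_rms:
                best_loc, best_rms = (x, y), rms
    return best_loc, best_rms
```

In practice the grid would be 3-D (including depth) and the travel times would come from a layered velocity model, but the residual-minimization structure is the same.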

    Windowed Decoding of Protograph-based LDPC Convolutional Codes over Erasure Channels

    We consider a windowed decoding scheme for LDPC convolutional codes that is based on the belief-propagation (BP) algorithm. We discuss the advantages of this decoding scheme and identify characteristics of LDPC convolutional code ensembles that exhibit good performance with the windowed decoder. We then consider the performance of these ensembles and codes over erasure channels with and without memory. We show that the structure of LDPC convolutional code ensembles is suitable for obtaining performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and for windowed decoding. However, the same structure imposes limitations on the performance over erasure channels with memory. (Comment: 18 pages, 9 figures, accepted for publication in the IEEE Transactions on Information Theory.)
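    Over an erasure channel, BP decoding of an LDPC code reduces to the well-known peeling procedure: any parity check containing exactly one erased bit determines that bit as the XOR of the others. A minimal sketch of this idea follows; it is generic peeling over a parity-check matrix, not the paper's windowed decoder, and the function name and data layout are assumptions.

```python
def peel_erasures(H, y):
    """Peeling (BP-on-erasures) decoder sketch.

    H : list of parity checks, each a list of bit indices whose XOR is 0
    y : received word as a list of 0/1 values, with None marking erasures
    Repeatedly solves any check with exactly one erased bit; returns the
    (possibly only partially) recovered word."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [i for i in check if y[i] is None]
            if len(erased) == 1:
                # The erased bit equals the XOR of the known bits in the check.
                i = erased[0]
                y[i] = sum(y[j] for j in check if j != i) % 2
                progress = True
    return y
```

The decoder gets stuck exactly when the remaining erasures form a stopping set; the windowed variant studied in the paper applies this kind of message passing only within a sliding window over the convolutional code's diagonal parity-check structure.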

    An operant intra-/extra-dimensional set-shift task for mice

    Alterations in executive control and cognitive flexibility, such as attentional set-shifting abilities, are core features of several neuropsychiatric diseases. The most widely used neuropsychological tests for evaluating attentional set-shifting in human subjects are the Wisconsin Card Sorting Test (WCST) and the CANTAB Intra-/Extra-dimensional set-shift task (ID/ED). These tasks have proven clinical relevance and have been modified and successfully adapted for research in animal models. However, currently available tasks for rodents have several limitations, mainly due to their manual testing procedures, which hamper translational advances in psychiatric medicine. To overcome these limitations and to better mimic the original version in primates, we present the development of a novel operant-based two-chamber ID/ED "Operon" task for rodents. We demonstrate the effectiveness of this novel task in measuring different facets of cognitive flexibility in mice, including attentional set formation and shifting, and reversal learning. Moreover, we show the high flexibility of this task, in which three different perceptual dimensions can be manipulated with a large number of stimulus cues for each dimension. This novel ID/ED Operon task can be an effective preclinical tool for drug testing and/or large genetic screens relevant to the study of executive dysfunction and cognitive symptoms found in psychiatric disorders.

    CAncer bioMarker Prediction Pipeline (CAMPP) - A standardized framework for the analysis of quantitative biological data

    With the improvement of -omics and next-generation sequencing (NGS) methodologies, along with the lowered cost of generating these types of data, the analysis of high-throughput biological data has become standard both for forming and for testing biomedical hypotheses. Our knowledge of how to normalize datasets to remove latent undesirable variance has grown extensively, making for standardized data that are easily compared between studies. Here we present the CAncer bioMarker Prediction Pipeline (CAMPP), an open-source R-based wrapper (https://github.com/ELELAB/CAncer-bioMarker-Prediction-Pipeline) intended to aid users of bioinformatic software with data analyses. CAMPP is called from a terminal command line and is supported by a user-friendly manual. The pipeline may be run on a local computer and requires little or no knowledge of programming. To avoid issues relating to R-package updates, an renv.lock file is provided to ensure R-package stability. Data management includes missing-value imputation, data normalization, and distributional checks. CAMPP performs (I) k-means clustering, (II) differential expression/abundance analysis, (III) elastic-net regression, (IV) correlation and co-expression network analyses, (V) survival analysis, and (VI) protein-protein/miRNA-gene interaction network analysis. The pipeline returns tabular files and graphical representations of the results. We hope that CAMPP will assist in streamlining bioinformatic analysis of quantitative biological data, whilst ensuring an appropriate bio-statistical framework.

    Bi nanowires modified by 400 keV and 1 MeV Au ions

    We report on the modification of the structure and morphology of Bi nanowires of two different diameters (80 or 130 nm) exposed to beams of 400 keV and 1 MeV Au+ ions until complete wire degradation. For fluences up to ∼1 ion/nm², the main effect was a slight roughening of the originally smooth surface and the appearance of a damaged zone at the wire edges. After exposure to ∼2 ions/nm², shallow (∼5-7 nm deep) but wide (up to 120 nm) depressions are seen, giving the wires a "wavy" morphology. At the largest fluence tested (10 ions/nm²), the thickest nanowires present an amorphized structure containing an embedded dispersion of small spherical metallic crystallites, while the thinner wires collapse into large (∼50 nm) nanoparticles composed of a crystalline core surrounded by a disordered oxidized shell. The observed morphological modifications are discussed considering sputtering and radiation-induced surface-diffusion effects.

    Improved time diversity for LTE over satellite using split multicode transmission
