    Automatic summarization of rushes video using bipartite graphs

    In this paper we present a new approach for automatic summarization of rushes video. Our approach is composed of three main steps. First, based on a temporal segmentation, we filter sub-shots with low information content that are unlikely to be useful in a summary. Second, a method using maximal matching in a bipartite graph is adapted to measure similarity between the remaining shots and to minimize inter-shot redundancy by removing the repetitive retake shots common in rushes content. Finally, the presence of faces and the motion intensity are characterised in each sub-shot, and a measure of how representative each sub-shot is in the context of the overall video is proposed. Video summaries composed of keyframe slideshows are then generated. To evaluate the effectiveness of this approach we re-ran the evaluation from the TRECVID 2007 video summarization task, using the same dataset and evaluation metrics but with our own assessors. Results show that our approach leads to a significant improvement in the fraction of the TRECVID summary ground truth included, and is competitive with other approaches in TRECVID 2007.
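The redundancy-removal step can be illustrated with a minimal sketch (not the paper's exact implementation): link keyframes of two shots whose pairwise visual similarity exceeds a threshold, compute a maximal matching in the resulting bipartite graph with Kuhn's augmenting-path algorithm, and treat a large matching as evidence that one shot is a retake of the other. The `shot_similarity` normalisation by the smaller shot is an assumption made here for illustration.

```python
from typing import List


def maximal_matching(sim: List[List[float]], threshold: float) -> int:
    """Size of a maximum matching in the bipartite graph whose edges link
    keyframes of shot A (rows) to keyframes of shot B (columns) with
    similarity >= threshold. Kuhn's augmenting-path algorithm."""
    n_a, n_b = len(sim), len(sim[0])
    match_b = [-1] * n_b  # match_b[j] = row matched to column j, or -1

    def try_augment(i: int, seen: List[bool]) -> bool:
        for j in range(n_b):
            if sim[i][j] >= threshold and not seen[j]:
                seen[j] = True
                # Column j is free, or its current partner can be re-routed.
                if match_b[j] == -1 or try_augment(match_b[j], seen):
                    match_b[j] = i
                    return True
        return False

    return sum(try_augment(i, [False] * n_b) for i in range(n_a))


def shot_similarity(sim: List[List[float]], threshold: float) -> float:
    # Normalise the matching size by the smaller shot, giving a score in
    # [0, 1]; near-duplicate retakes score close to 1 and can be pruned.
    return maximal_matching(sim, threshold) / min(len(sim), len(sim[0]))
```

A score near 1 means almost every keyframe of the smaller shot has a close counterpart in the other shot, which is the signature of a retake.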

    Using Sensor Metadata Streams to Identify Topics of Local Events in the City

    In this paper, we study the emerging Information Retrieval (IR) task of local event retrieval using sensor metadata streams. Sensor metadata streams include information such as crowd density from video processing, audio classifications, and social media activity. We propose to use these metadata streams to identify the topics of local events within a city, where each event topic corresponds to a set of terms representing a type of event, such as a concert or a protest. We develop a supervised approach that is capable of mapping sensor metadata observations to an event topic. In addition to using a variety of sensor metadata observations about the current status of the environment as learning features, our approach incorporates additional background features to model cyclic event patterns. Through experimentation with data collected from two locations in a major Spanish city, we show that our approach markedly outperforms an alternative baseline. We also show that modelling background information improves event topic identification.
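The shape of such a supervised mapper can be sketched with a toy nearest-centroid classifier (the paper's actual learner and feature set are not specified here; the feature names and the sine/cosine time encoding below are illustrative assumptions). Encoding hour-of-day and day-of-week as points on a circle is one simple way to capture cyclic background patterns, since 23:00 and 00:00 end up close together.

```python
import math
from collections import defaultdict


def cyclic_time_features(hour: int, weekday: int):
    """Encode time of day and day of week as points on the unit circle,
    a simple way to model cyclic (daily/weekly) event patterns."""
    return (math.sin(2 * math.pi * hour / 24), math.cos(2 * math.pi * hour / 24),
            math.sin(2 * math.pi * weekday / 7), math.cos(2 * math.pi * weekday / 7))


class NearestCentroid:
    """Toy supervised mapper from sensor-metadata feature vectors
    (e.g. crowd density, audio class score, social-media rate) to an
    event topic such as "concert" or "protest"."""

    def fit(self, X, y):
        sums = defaultdict(lambda: None)
        counts = defaultdict(int)
        for x, label in zip(X, y):
            counts[label] += 1
            if sums[label] is None:
                sums[label] = list(x)
            else:
                sums[label] = [a + b for a, b in zip(sums[label], x)]
        # One mean feature vector (centroid) per event topic.
        self.centroids = {l: [v / counts[l] for v in s] for l, s in sums.items()}
        return self

    def predict(self, x):
        # Assign the topic whose centroid is nearest in squared distance.
        return min(self.centroids,
                   key=lambda l: sum((a - b) ** 2 for a, b in zip(self.centroids[l], x)))
```

In practice a stronger learner would replace the centroid rule, but the interface is the same: observations in, event topic out.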

    Paradigms, possibilities and probabilities: Comment on Hinterecker et al. (2016)

    Hinterecker et al. (2016) compared the adequacy of the probabilistic new paradigm in reasoning with the recent revision of mental models theory (MMT) for explaining a novel class of inferences containing the modal term "possibly". For example: the door is closed or the window is open or both, therefore, possibly the door is closed and the window is open (A or B or both, therefore, possibly(A & B)). They concluded that their results support MMT. In this comment, it is argued that Hinterecker et al. (2016) have not adequately characterised the theory of probabilistic validity (p-validity) on which the new paradigm depends. It is unclear how p-validity can be applied to these inferences, which are in any case peripheral to the theory. It is also argued that the revision of MMT is not well motivated and that its adoption leads to many logical absurdities. Moreover, the comparison is not appropriate because these theories are defined at different levels of computational explanation. In particular, revised MMT lacks a provably consistent computational-level theory that could justify treating these inferences as valid. It is further argued that the data could result from the non-colloquial locutions used to express the premises. Finally, an alternative pragmatic account is proposed based on the idea that a conclusion is possible if what someone knows cannot rule it out. This account could be applied to the unrevised mental models theory, rendering the revision redundant.

    Density functional study of the adsorption of K on the Ag(111) surface

    Full-potential, gradient-corrected density functional calculations of the adsorption of potassium on the Ag(111) surface have been performed. The considered structures are Ag(111) (√3 × √3)R30°-K and Ag(111) (2 × 2)-K. For the lower coverage, the fcc, hcp and bridge sites are practically degenerate; for the higher coverage, all considered sites are practically degenerate. Substrate rumpling is most important for the top adsorption site. The bond length is found to be nearly identical for the two coverages, in agreement with recent experiments. Results from Mulliken populations, bond lengths, core level shifts and work functions consistently indicate a small charge transfer from the potassium atom to the substrate, which is slightly larger for the lower coverage.
    Comment: to appear in Phys. Rev.

    LytR-CpsA-Psr proteins in Staphylococcus aureus display partial functional redundancy and the deletion of all three severely impairs septum placement and cell separation

    Staphylococcus aureus contains three members of the LytR-CpsA-Psr (LCP) family of membrane proteins: MsrR, SA0908 and SA2103. The characterization of single-, double- and triple-deletion mutants revealed distinct phenotypes for each of the three proteins. MsrR was involved in cell separation and septum formation and influenced β-lactam resistance; SA0908 protected cells from autolysis; and SA2103, although displaying no apparent phenotype by itself, enhanced the phenotypes of msrR and sa0908 mutants when deleted. The deletion of sa0908 and sa2103 also further attenuated the virulence of msrR mutants in a nematode-killing assay. The severely defective growth phenotype of the triple mutant revealed that LytR-CpsA-Psr proteins are essential for optimal cell division in S. aureus. Growth could be rescued to varying degrees by any one of the three proteins, indicating some functional redundancy within members of this protein family. However, differing phenotypic characteristics of all single and double mutants and complemented triple mutants indicated that each protein plays a distinct role and contributes differently to phenotypes influencing cell separation, autolysis, cell surface properties and virulence.

    Optimization methods and their use in low-energy electron-diffraction calculations

    The speed of automatic optimization procedures used in surface structure determination by low-energy electron diffraction can be greatly enhanced by the use of linear approximations in the calculation of scattering amplitudes. It is shown how linear approximations can be used in the calculation of the derivatives of intensities which are required in the least-squares optimization method. The derivatives with respect to structural and nonstructural parameters are calculated by applying a combination of analytic and numerical methods in connection with approximations of the sum over lattice points in the angular momentum representation. Special cases for different structural and nonstructural parameters, and simplifications for special geometries, are discussed. The computational effort becomes nearly independent of the number of free parameters and enables the analysis of complex surface structures.
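The least-squares machinery the abstract refers to can be sketched generically (this is not the paper's tensor-LEED linear approximation, just the role derivatives play in the fit): estimate the derivative of a model intensity with respect to a parameter by central differences, then use those derivatives as the Jacobian in a Gauss-Newton update.

```python
def numerical_derivative(f, x, h=1e-6):
    # Central difference: O(h^2)-accurate approximation of df/dx.
    return (f(x + h) - f(x - h)) / (2 * h)


def gauss_newton_1d(model, p, xs, ys, iters=10):
    """Fit a single parameter p by least squares on data (xs, ys):
    minimise sum_i (model(x_i, p) - y_i)^2 with Gauss-Newton updates,
    using numerically estimated derivatives as the Jacobian."""
    for _ in range(iters):
        r = [model(x, p) - y for x, y in zip(xs, ys)]   # residuals
        J = [numerical_derivative(lambda q, x=x: model(x, q), p) for x in xs]
        # Gauss-Newton step for one parameter: p -= (J^T r) / (J^T J).
        p -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return p
```

Each evaluation of `J` costs one model call per data point and parameter, which is why cheap (linear) approximations to the scattering amplitudes pay off so directly in the optimization loop.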

    Spin and Rotations in Galois Field Quantum Mechanics

    We discuss the properties of Galois Field Quantum Mechanics constructed on a vector space over the finite Galois field GF(q). In particular, we look at 2-level systems analogous to spin, and discuss how SO(3) rotations could be embodied in such a system. We also consider two-particle `spin' correlations and show that the Clauser-Horne-Shimony-Holt (CHSH) inequality is nonetheless not violated in this model.
    Comment: 21 pages, 11 PDF figures, LaTeX. Uses iopart.cls. Revised introduction. Additional reference
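For context, the bound the abstract says this model respects can be checked by brute force. The sketch below is not the Galois-field construction itself; it only enumerates deterministic local strategies (every measurement outcome fixed to ±1) to recover the classical CHSH bound of 2, the value ordinary quantum mechanics exceeds but this model does not.

```python
from itertools import product


def chsh_max_classical():
    """Maximum of |S|, S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
    over deterministic local strategies: each of Alice's settings (a, a')
    and Bob's settings (b, b') is assigned a fixed outcome of +1 or -1.
    Local hidden-variable models are mixtures of these strategies, so
    they cannot exceed this bound."""
    best = 0
    for a, a2, b, b2 in product((-1, 1), repeat=4):
        S = a * b + a * b2 + a2 * b - a2 * b2
        best = max(best, abs(S))
    return best  # classical (CHSH) bound: 2
```

The algebra behind the enumeration is S = a(b + b') + a'(b - b'): one bracket is always 0 and the other ±2, so |S| never exceeds 2 for deterministic outcomes.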

    ScotGrid: Providing an Effective Distributed Tier-2 in the LHC Era

    ScotGrid is a distributed Tier-2 centre in the UK with sites in Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion in hardware in anticipation of the LHC and now provides more than 4 MSI2K and 500 TB to the LHC VOs. Scaling up to this level of provision has brought many challenges to the Tier-2, and we show in this paper how we have adopted new methods of organising the centres, from fabric management and monitoring to remote management of sites and operational procedures, to meet these challenges. We describe how we have coped with different operational models at the sites: the Glasgow and Durham sites are managed "in house", but resources at Edinburgh are managed as a central university resource. This required the adoption of a different fabric management model at Edinburgh and a special engagement with the cluster managers. Challenges arose from the different job models of local and grid submission that required special attention to resolve. We show how ScotGrid has successfully provided an infrastructure for ATLAS and LHCb Monte Carlo production. Special attention has been paid to ensuring that user analysis functions efficiently, which has required optimisation of local storage and networking to cope with the demands of user analysis. Finally, although these Tier-2 resources are pledged to the whole VO, we have established close links with our local physics user communities as the best way to ensure that the Tier-2 functions effectively as a part of the LHC grid computing framework.
    Comment: Preprint for 17th International Conference on Computing in High Energy and Nuclear Physics, 7 pages, 1 figure