
    Experiments in fault tolerant software reliability

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued, and formulation of new models was initiated.
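    As a rough illustration of the voting idea (not the RSDIMU-ATS code), the sketch below applies exact-match majority voting to the outputs of redundant software versions, the regime in which small output spaces make voting most effective; the version functions and values are made up.

```python
# Illustrative sketch of output voting across redundant software versions,
# as used in N-version fault-tolerant systems. Not the RSDIMU-ATS code.
from collections import Counter

def majority_vote(outputs):
    """Return the value produced by a strict majority of versions, or None if there is none."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

# Three independently developed versions; one produces a faulty result.
versions = [lambda x: x * x, lambda x: x * x, lambda x: x * x + 1]
outputs = [v(4) for v in versions]
print(majority_vote(outputs))  # 16, the faulty version is outvoted
```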

    Sparse multiple relay selection for network beamforming with individual power constraints using semidefinite relaxation

    This paper deals with the multiple relay selection problem in two-hop wireless cooperative networks with individual power constraints at the relays. In particular, it addresses the problem of selecting the best subset of K cooperative nodes and their corresponding beamforming weights so that the signal-to-noise ratio (SNR) at the destination is maximized. This problem is computationally demanding, as the optimal solution requires an exhaustive search over all possible combinations. To reduce the complexity, a new suboptimal method is proposed. This technique exhibits near-optimal performance with a computational burden far lower than that of the combinatorial search. The proposed method is based on the squared l1-norm and the Charnes-Cooper transformation, and it naturally leads to a semidefinite programming relaxation with an affordable computational cost. Contrary to other approaches in the literature, the technique presented herein relies only on knowledge of the second-order statistics of the channels, and the relays are not restricted to cooperating at full power.
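    A simplified sketch of the kind of relaxation involved is given below, assuming placeholder second-order channel statistics and using cvxpy: the Charnes-Cooper change of variables linearizes the fractional SNR objective, and the per-relay power constraints are scaled accordingly. The sparsity-inducing squared l1-norm term and the exact problem structure from the paper are omitted, so this is an illustration rather than the authors' formulation.

```python
# Simplified sketch: Charnes-Cooper + semidefinite relaxation for
# SNR-maximizing relay beamforming with per-relay power constraints.
# Channel statistics are placeholders; the squared l1-norm sparsity term
# from the paper is omitted for brevity.
import numpy as np
import cvxpy as cp

K = 6                                          # candidate relays
rng = np.random.default_rng(1)
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)
A = np.outer(h, h.conj())                      # "signal" covariance (placeholder)
B = np.diag(rng.uniform(0.5, 1.5, K))          # "noise" covariance (placeholder)
P = np.ones(K)                                 # per-relay power budgets

# Fractional objective tr(A X) / (tr(B X) + 1) becomes linear after the
# Charnes-Cooper change of variables Y = t*X with tr(B Y) + t = 1.
Y = cp.Variable((K, K), hermitian=True)
t = cp.Variable(nonneg=True)
constraints = [Y >> 0, cp.real(cp.trace(B @ Y)) + t == 1]
constraints += [cp.real(Y[k, k]) <= float(P[k]) * t for k in range(K)]  # scaled power limits

prob = cp.Problem(cp.Maximize(cp.real(cp.trace(A @ Y))), constraints)
prob.solve()

X = Y.value / t.value                          # undo the transformation
eigvals, eigvecs = np.linalg.eigh(X)
w = eigvecs[:, -1] * np.sqrt(max(eigvals[-1], 0.0))   # rank-1 approximation of the weights
print("relay weight magnitudes:", np.round(np.abs(w), 3))
```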

    Assessing the Health of Richibucto Estuary with the Latent Health Factor Index

    The ability to quantitatively assess the health of an ecosystem is often of great interest to those tasked with monitoring and conserving ecosystems. For decades, research in this area has relied upon multimetric indices of various forms. Although such indices are numbers, many are constructed by procedures that are highly qualitative in nature, limiting the quantitative rigour of the practical interpretations made from them. The latent health factor index (LHFI) is a recently developed statistical modelling approach that expresses the ecological data normally used to construct conventional multimetric health indices in a rigorous quantitative model, integrating qualitative features of ecosystem health with preconceived ecological relationships among those features. This hierarchical modelling approach allows (a) statistical inference of health for observed sites and (b) prediction of health for unobserved sites, both accompanied by formal uncertainty statements. Thus far, the LHFI approach has been demonstrated and validated on freshwater ecosystems. The goal of this paper is to adapt the approach to modelling estuarine ecosystem health, particularly that of the previously unassessed Richibucto system in New Brunswick, Canada. Field data correspond to the biotic health metrics that constitute the AZTI marine biotic index (AMBI) and to abiotic predictors preconceived to influence biota. We also briefly discuss related LHFI research involving additional metrics that form the infaunal trophic index (ITI). Our paper is the first to construct a scientifically sensible model that rigorously identifies the collective explanatory capacity of salinity, distance downstream, channel depth, and silt-clay content (all regarded a priori as qualitatively important abiotic drivers) towards site health in the Richibucto ecosystem.
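    As a schematic of the hierarchical latent-factor idea (not the authors' LHFI specification, which is fit to AMBI metrics), the sketch below, written with PyMC on simulated placeholder data, lets a latent site health depend on abiotic covariates while observed biotic metrics load on that latent health; the posterior over the latent variable then plays the role of a health index with formal uncertainty.

```python
# Schematic latent-health-factor model on simulated placeholder data.
# Not the authors' exact LHFI specification; identifiability constraints
# on the loadings are ignored in this toy sketch.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_sites, n_metrics, n_cov = 30, 4, 4          # e.g. salinity, distance, depth, silt-clay
X = rng.standard_normal((n_sites, n_cov))     # abiotic predictors (standardized)
true_H = X @ np.array([0.8, -0.5, 0.3, 0.4]) + rng.normal(0, 0.3, n_sites)
Y = np.outer(true_H, [1.0, 0.7, -0.6, 0.9]) + rng.normal(0, 0.5, (n_sites, n_metrics))

with pm.Model() as lhfi_like:
    beta = pm.Normal("beta", 0, 1, shape=n_cov)               # covariate effects on health
    tau = pm.HalfNormal("tau", 1.0)
    H = pm.Normal("H", mu=pm.math.dot(X, beta), sigma=tau, shape=n_sites)  # latent health
    load = pm.Normal("load", 0, 1, shape=n_metrics)           # metric loadings
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("metrics", mu=H[:, None] * load, sigma=sigma, observed=Y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# The posterior of H gives site-level health estimates with uncertainty intervals.
```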

    A Data-Driven Approach for Tag Refinement and Localization in Web Videos

    Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g. using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. With current systems it is easy to tag a single photo, and even tagging a part of a photo, such as a face, has become common on sites like Flickr and Facebook. Tagging a video sequence, on the other hand, is more complicated and time consuming, so users tend to tag only the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users and localizes them temporally, associating tags with keyframes. Our approach exploits the collective knowledge embedded in user-generated tags and web sources, and the visual similarity of keyframes to images uploaded to social sites like YouTube and Flickr, as well as to web sources like Google and Bing. Given a keyframe, our method selects on the fly, from these visual sources, the training exemplars that should be most relevant for the test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training a classifier for each tag, our system has few parameters, is easy to implement, and can deal with an open-vocabulary scenario. We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results.
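    As a toy illustration of the label-transfer step (feature extraction and web image retrieval are assumed to happen elsewhere, and all names and data below are hypothetical), the sketch votes the tags of the most visually similar retrieved images onto a keyframe, weighted by cosine similarity.

```python
# Toy sketch of similarity-weighted tag transfer to a keyframe.
# Feature extraction and image retrieval are assumed to happen upstream.
import numpy as np

def transfer_tags(keyframe_feat, neighbor_feats, neighbor_tags, k=5):
    """Vote tags from the k visually closest retrieved images onto the keyframe."""
    a = keyframe_feat / np.linalg.norm(keyframe_feat)
    b = neighbor_feats / np.linalg.norm(neighbor_feats, axis=1, keepdims=True)
    sims = b @ a                                  # cosine similarity to each candidate
    top = np.argsort(sims)[::-1][:k]
    scores = {}
    for i in top:
        for tag in neighbor_tags[i]:
            scores[tag] = scores.get(tag, 0.0) + float(sims[i])  # similarity-weighted vote
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example with random features and hypothetical tags.
rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 128))
tags = [["beach"], ["beach", "sunset"], ["city"], ["sunset"], ["beach"], ["dog"]]
print(transfer_tags(rng.standard_normal(128), feats, tags))
```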