
    Orthogonal-Array based Design Methodology for Complex, Coupled Space Systems

    The process of designing a complex system, formed by many elements and sub-elements interacting with each other, is usually completed at system level, in the preliminary phases, in two major steps: design-space exploration and optimization. In a classical approach, especially in a company environment, the two steps are usually performed together, with experts of the field reasoning about the major phenomena, making assumptions, and running some trial-and-error cases on the available mathematical models. To support designers and decision makers during the design phases of this kind of complex system, and to enable early discovery of emergent behaviours arising from interactions between the various elements being designed, the authors implemented a parametric methodology for design-space exploration and optimization. The parametric technique is based on a particular type of matrix design of experiments, the orthogonal array. Through successive design iterations with orthogonal arrays, the optimal solution is reached with reduced effort compared to more computationally intensive techniques, while also providing sensitivity and robustness information. The paper describes the design methodology in detail and provides an application example: the design of a human mission to support a lunar base.
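    The abstract describes iterating over orthogonal-array designs of experiments to explore and optimize a coupled design space. The sketch below is a minimal illustration of one such iteration using the standard L4(2^3) array and a made-up objective function; the paper's lunar-mission model is not reproduced, and the factor names, level values and `evaluate` function are all hypothetical.

```python
import numpy as np

# L4 (2^3) orthogonal array: 4 runs, 3 two-level factors, balanced columns.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def evaluate(design_point):
    """Hypothetical system-level objective (e.g., total mission mass).
    Stands in for the coupled mission model used in the paper."""
    crew, habitat, isru = design_point
    return 100.0 + 20.0 * crew + 15.0 * habitat - 10.0 * isru + 5.0 * crew * isru

def oa_iteration(levels):
    """One orthogonal-array iteration: evaluate the 4 runs defined by L4,
    return per-factor main effects (sensitivity) and the best level of each factor."""
    runs = np.array([[levels[f][lvl] for f, lvl in enumerate(row)] for row in L4])
    y = np.array([evaluate(r) for r in runs])
    effects, best = [], []
    for f in range(L4.shape[1]):
        means = [y[L4[:, f] == lvl].mean() for lvl in (0, 1)]
        effects.append(means[1] - means[0])   # main effect of factor f
        best.append(int(np.argmin(means)))    # level minimizing the objective
    return effects, best

# Two candidate values (levels) per factor; successive iterations would
# shrink these ranges around the best levels found so far.
levels = [(2, 4), (1.0, 2.0), (0.0, 1.0)]
effects, best = oa_iteration(levels)
print("main effects:", effects)
print("best levels :", [levels[f][b] for f, b in enumerate(best)])
```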

    Covariate conscious approach for Gait recognition based upon Zernike moment invariants

    Gait recognition, i.e. identification of an individual from his or her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily under normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this work, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2D spatio-temporal template from the video sequence, called the Average Energy Silhouette Image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the Spatial Distribution of Oriented Gradients (SDOGs) and the novel Mean of Directional Pixels (MDPs) methods. The obtained features are fused together to form the final feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e. CASIA dataset B, OU-ISIR Treadmill dataset B and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance. (Comment: 11 pages)
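    As a rough illustration of the template-formation step, the sketch below averages binary silhouette frames into a single AESI-style image and flags horizontal strips that deviate from a covariate-free reference. The strip-difference screening is only a placeholder for the paper's Zernike-moment-based screening, and the SDOG/MDP feature extraction is not reproduced; frame sizes, thresholds and the synthetic data are assumptions.

```python
import numpy as np

def average_energy_silhouette(frames):
    """Collapse a sequence of binary silhouette frames (T, H, W) into a single
    2D spatio-temporal template by averaging (one reading of the AESI idea)."""
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=0)

def screen_rows(aesi, reference_aesi, threshold=0.15):
    """Toy stand-in for covariate screening: compare horizontal strips of a
    probe AESI against a covariate-free reference and flag strips that deviate
    strongly (e.g., a carried bag or a coat changes those regions)."""
    bands = np.array_split(np.arange(aesi.shape[0]), 8)
    keep = []
    for rows in bands:
        diff = np.abs(aesi[rows] - reference_aesi[rows]).mean()
        keep.append(diff < threshold)   # True -> strip treated as covariate-free
    return keep

# Synthetic example: 20 random "silhouette" frames of size 64x44.
rng = np.random.default_rng(0)
frames = rng.integers(0, 2, size=(20, 64, 44))
aesi = average_energy_silhouette(frames)
reference = average_energy_silhouette(rng.integers(0, 2, size=(20, 64, 44)))
print(screen_rows(aesi, reference))
```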

    Rapid Visual Categorization is not Guided by Early Salience-Based Selection

    The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing-time efficiency in machine algorithms, which in turn has strengthened the suggestion that human vision also employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by the experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization. (Comment: 22 pages, 9 figures)
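    One way to phrase the test the abstract asks for is: given a stimulus image and a mask of the target object, does the top few percent of a saliency map overlap the target at all? The sketch below runs that check with a crude center-surround contrast map standing in for the published saliency models the paper actually benchmarks; the image, target mask, filter sizes and threshold are all assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_saliency(image, local=3, surround=21):
    """Very simple center-surround contrast map; a stand-in for real saliency models."""
    center = uniform_filter(image.astype(np.float64), size=local)
    surround_mean = uniform_filter(image.astype(np.float64), size=surround)
    return np.abs(center - surround_mean)

def early_selection_hits_target(image, target_mask, top_fraction=0.05):
    """Does the top `top_fraction` most-salient portion of the image overlap
    the target object at all? This is the candidate-sub-image question above."""
    sal = contrast_saliency(image)
    cutoff = np.quantile(sal, 1.0 - top_fraction)
    candidate = sal >= cutoff
    return bool(np.logical_and(candidate, target_mask).any())

# Synthetic scene: uniform background with a brighter patch as the "target".
image = np.full((128, 128), 0.2)
target_mask = np.zeros_like(image, dtype=bool)
image[50:70, 60:80] = 0.9
target_mask[50:70, 60:80] = True
print(early_selection_hits_target(image, target_mask))
```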

    Computer simulation of pulsed field gel runs allows the quantitation of radiation-induced double-strand breaks in yeast

    A procedure for the quantification of double-strand breaks in yeast is presented that utilizes pulsed field gel electrophoresis (PFGE) and a comparison of the observed DNA mass distribution in the gel lanes with calculated distributions. Calculation of the profiles is performed as follows. If double-strand breaks are produced by sparsely ionizing radiation, one can assume that they are distributed randomly in the genome, and the resulting DNA mass distribution in molecular length can be predicted by means of a random breakage model. The input data for the computation of the molecular length profiles are the breakage frequency per unit length, which serves as an adjustable parameter, and the molecular lengths of the intact chromosomes. The obtained DNA mass distributions in molecular length must then be transformed into distributions of DNA mass in migration distance. This requires a calibration of molecular length vs. migration distance that is specific for the gel lane in question. The computed profiles are then folded with a Lorentz distribution with an adjustable spread parameter to account for band broadening. The DNA profiles are calculated for different breakage frequencies and different values of the spread parameter, and the parameters resulting in the best fit of the calculated to the observed profile are determined.
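    A minimal sketch of the described pipeline, under assumed numbers: fragment lengths are generated with a Monte-Carlo random breakage model, turned into a mass-versus-migration-distance profile through an assumed length-to-distance calibration, and folded with a Lorentz kernel for band broadening. The chromosome lengths, calibration curve, breakage frequency and spread parameter below are illustrative only; fitting them to an observed lane would sit on top of this.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_breakage_fragments(chromosome_lengths, breaks_per_mb, n_trials=200):
    """Monte-Carlo random breakage model: break each chromosome at
    Poisson-distributed positions and collect the fragment lengths (Mb)."""
    fragments = []
    for _ in range(n_trials):
        for L in chromosome_lengths:
            k = rng.poisson(breaks_per_mb * L)
            cuts = np.sort(rng.uniform(0.0, L, size=k))
            edges = np.concatenate(([0.0], cuts, [L]))
            fragments.extend(np.diff(edges))
    return np.array(fragments)

def mass_vs_migration(fragments, calibration, bins):
    """DNA *mass* profile vs migration distance: each fragment contributes its
    length (mass) at the migration distance given by the gel calibration."""
    distance = calibration(fragments)
    profile, _ = np.histogram(distance, bins=bins, weights=fragments)
    return profile

def lorentz_broaden(profile, gamma, bin_width):
    """Fold the computed profile with a Lorentz (Cauchy) kernel of spread gamma
    to mimic band broadening in the gel lane."""
    x = np.arange(-50, 51) * bin_width
    kernel = (gamma / np.pi) / (x**2 + gamma**2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

# Illustrative numbers only: a few yeast-sized chromosomes (Mb), an assumed
# length-to-distance calibration, and an assumed spread parameter.
chromosomes = [0.23, 0.58, 0.81, 1.09, 1.53, 2.2]
frags = random_breakage_fragments(chromosomes, breaks_per_mb=1.5)
bins = np.linspace(0.0, 10.0, 101)                  # migration distance (cm)
calib = lambda length: 9.0 / (1.0 + length)         # longer -> migrates less
model_profile = lorentz_broaden(mass_vs_migration(frags, calib, bins),
                                gamma=0.2, bin_width=bins[1] - bins[0])
print(model_profile.round(2))
```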

    A Robust Zero-Calibration RF-based Localization System for Realistic Environments

    Due to the noisy indoor radio propagation channel, Radio Frequency (RF)-based location determination systems usually require a tedious calibration phase to construct an RF fingerprint of the area of interest. This fingerprint varies with the mobile device used, with changes in the transmit power of smart access points (APs), and with dynamic changes in the environment, requiring re-calibration of the area of interest and reducing the technology's ease of use. In this paper, we present IncVoronoi: a novel system that can provide zero-calibration, accurate RF-based indoor localization in realistic environments. The basic idea is that the relative relation between the received signal strengths from two APs at a certain location reflects the relative distances from this location to the respective APs. Building on this, IncVoronoi incrementally reduces the user's ambiguity region by refining the Voronoi tessellation of the area of interest. IncVoronoi also includes a number of modules to run efficiently in real time and to handle practical deployment issues, including the noisy wireless environment, obstacles in the environment, heterogeneous device hardware, and smart APs. We have deployed IncVoronoi on different Android phones using iBeacon technology in a university campus. Evaluation of IncVoronoi in a side-by-side comparison with traditional fingerprinting techniques shows that it can achieve a consistent median accuracy of 2.8 m under different scenarios with a low beacon density of one beacon every 44 m². Compared to fingerprinting techniques, whose accuracy degrades by at least 156%, this accuracy comes with no training overhead and is robust to different user devices, different transmit powers, and temporal changes in the environment. This highlights the promise of IncVoronoi as a next-generation indoor localization system. (Comment: 9 pages, 13 figures, published in SECON 201)
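    A grid-based sketch of the core geometric idea, assuming ideal (monotone) signal-to-distance behaviour: each pair of APs whose RSS values can be ordered keeps only the half of the plane closer to the stronger AP, and intersecting these halves incrementally shrinks the ambiguity region, the Voronoi-refinement intuition described above. The AP layout and RSS values are made up, and none of the system's practical modules (obstacles, heterogeneous hardware, smart APs) are modelled here.

```python
import numpy as np

def incremental_region(ap_positions, rss, area=(0.0, 50.0, 0.0, 50.0), step=0.5):
    """For every AP pair with an RSS ordering, keep only the candidate points
    closer to the stronger AP; the intersection of these half-planes is the
    remaining ambiguity region."""
    xmin, xmax, ymin, ymax = area
    xs, ys = np.meshgrid(np.arange(xmin, xmax, step), np.arange(ymin, ymax, step))
    candidates = np.column_stack([xs.ravel(), ys.ravel()])
    keep = np.ones(len(candidates), dtype=bool)
    aps = np.asarray(ap_positions, dtype=float)
    for i in range(len(aps)):
        for j in range(i + 1, len(aps)):
            if rss[i] == rss[j]:
                continue                      # no usable ordering for this pair
            strong, weak = (i, j) if rss[i] > rss[j] else (j, i)
            d_strong = np.linalg.norm(candidates - aps[strong], axis=1)
            d_weak = np.linalg.norm(candidates - aps[weak], axis=1)
            keep &= d_strong <= d_weak        # stay on the stronger AP's side
    region = candidates[keep]
    return region, (region.mean(axis=0) if len(region) else None)

# Assumed toy deployment: four beacons at the corners and one RSS reading each.
aps = [(0, 0), (50, 0), (0, 50), (50, 50)]
rss = [-60, -75, -70, -85]                    # dBm; stronger (less negative) = closer
region, estimate = incremental_region(aps, rss)
print(len(region), "candidate points remain; estimate =", estimate)
```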

    Semi-automatic selection of summary statistics for ABC model choice

    A central statistical goal is to choose between alternative explanatory models of data. In many modern applications, such as population genetics, it is not possible to apply standard methods based on evaluating the likelihood functions of the models, as these are numerically intractable. Approximate Bayesian computation (ABC) is a commonly used alternative for such situations. ABC simulates data x for many parameter values under each model, and these simulations are compared to the observed data x_obs. More weight is placed on models under which S(x) is close to S(x_obs), where S maps data to a vector of summary statistics. Previous work has shown that the choice of S is crucial to the efficiency and accuracy of ABC. This paper provides a method to select good summary statistics for model choice. It uses a preliminary step, simulating many x values from all models and fitting regressions to these with the model as response. The resulting model weight estimators are used as S in an ABC analysis. Theoretical results are given to justify this as approximating low-dimensional sufficient statistics. A substantive application is presented: choosing between competing coalescent models of demographic growth for Campylobacter jejuni in New Zealand using multi-locus sequence typing data.
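    A toy end-to-end sketch of the semi-automatic idea, with two stand-in models instead of the coalescent models from the application: a pilot run simulates from both models, a logistic regression with the model label as response is fitted to crude data features, and the fitted model probability is then used as the summary statistic S in a simple rejection-ABC model-choice run. The distributions, feature choices, priors and tolerance below are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def simulate(model, theta, n=50):
    """Two toy competing models: model 0 ~ Normal(theta, 1), model 1 ~ Exp(theta)."""
    if model == 0:
        return rng.normal(theta, 1.0, size=n)
    return rng.exponential(theta, size=n)

def raw_features(x):
    # Crude data features fed to the pilot regression (not hand-picked summaries).
    return [x.mean(), x.std(), np.median(x), (x < 0).mean()]

# --- Pilot step: simulate from both models, regress the model label on features.
X, y = [], []
for _ in range(2000):
    m = rng.integers(0, 2)
    theta = rng.uniform(0.5, 3.0)
    X.append(raw_features(simulate(m, theta)))
    y.append(m)
clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

def S(x):
    """Semi-automatic summary statistic: the estimated model-1 probability."""
    return clf.predict_proba([raw_features(x)])[0, 1]

# --- ABC model choice: accept simulations whose S(x) is close to S(x_obs).
x_obs = rng.normal(1.5, 1.0, size=50)         # pretend the truth is model 0
s_obs, accepted = S(x_obs), []
for _ in range(5000):
    m = rng.integers(0, 2)
    theta = rng.uniform(0.5, 3.0)
    if abs(S(simulate(m, theta)) - s_obs) < 0.05:
        accepted.append(m)
print("approximate posterior P(model 1) =", np.mean(accepted))
```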