
    Multivariate Analysis, Retrieval, and Storage System (MARS). Volume 1: MARS System and Analysis Techniques

    A method for rapidly examining the probable applicability of weight estimating formulae to a specific aerospace vehicle design is presented. The Multivariate Analysis, Retrieval, and Storage System (MARS) comprises three computer programs that operate sequentially on the weight and geometry characteristics of past aerospace vehicle designs. Weight and geometric characteristics are stored in a set of fully computerized data bases. Additional data bases are readily added to the MARS system, and the existing data bases may be easily expanded to include additional vehicles or vehicle characteristics.

    Large System Analysis of Power Normalization Techniques in Massive MIMO

    Linear precoding has been widely studied in the context of Massive multiple-input-multiple-output (MIMO) systems together with two common power normalization techniques, namely matrix normalization (MN) and vector normalization (VN). Despite this, their effect on the performance of Massive MIMO systems has not yet been thoroughly studied. The aim of this paper is to fill this gap by means of large system analysis. Considering a system model that accounts for channel estimation, pilot contamination, arbitrary pathloss, and per-user channel correlation, we compute tight approximations for the signal-to-interference-plus-noise ratio and the rate of each user equipment in the system while employing maximum ratio transmission (MRT), zero forcing (ZF), and regularized ZF precoding under both MN and VN techniques. Such approximations are used to analytically reveal how the choice of power normalization affects the performance of MRT and ZF under uncorrelated fading channels. It turns out that ZF with VN resembles a sum-rate maximizer, while it provides a notion of fairness under MN. Numerical results are used to validate the accuracy of the asymptotic analysis and to show that in Massive MIMO, non-coherent interference and noise, rather than pilot contamination, are often the major limiting factors of the considered precoding schemes.
    Comment: 12 pages, 3 figures. Accepted for publication in the IEEE Transactions on Vehicular Technology.
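The MN/VN distinction above can be illustrated with a minimal sketch. This is not the paper's method, only a toy zero-forcing example with made-up dimensions (16 antennas, 4 users) showing the two normalizations: MN scales the whole precoder by one scalar, while VN scales each user's precoding vector individually so the power budget is split equally across users.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K = 16, 4                      # base-station antennas, users (illustrative)
P = 1.0                           # total transmit power budget
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of the channel matrix.
F = H.conj().T @ np.linalg.inv(H @ H.conj().T)        # shape (M, K)

# Matrix normalization (MN): one scalar scales the entire precoder
# so that the total transmit power equals P.
W_mn = np.sqrt(P / np.trace(F @ F.conj().T).real) * F

# Vector normalization (VN): each user's column is normalized
# separately, giving every user an equal share P/K of the power.
W_vn = np.sqrt(P / K) * F / np.linalg.norm(F, axis=0, keepdims=True)

# Both schemes satisfy the same total power constraint.
print(np.trace(W_mn @ W_mn.conj().T).real)   # ≈ 1.0
print(np.trace(W_vn @ W_vn.conj().T).real)   # ≈ 1.0
```

Under MN the per-user power shares follow the column norms of F, whereas under VN they are forced equal, which is one intuition behind the fairness/sum-rate trade-off the abstract describes.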

    Computer program uses Monte Carlo techniques for statistical system performance analysis

    A computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
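The sampling idea described above can be sketched in a few lines. This is a hypothetical toy model, not the original program: `system_response` and its parameter distributions are invented stand-ins for the real component statistics, drawn at random many times to estimate the spread of overall system performance.

```python
import random
import statistics

random.seed(42)

def system_response(gain, offset, misalignment):
    # Toy system model standing in for the real unit-level equations.
    return gain * (1.0 + misalignment) + offset

samples = []
for _ in range(10_000):
    # Each component parameter is drawn from its own error distribution.
    gain = random.gauss(2.0, 0.05)            # component gain tolerance
    offset = random.gauss(0.0, 0.01)          # bias error
    misalignment = random.uniform(-0.02, 0.02)
    samples.append(system_response(gain, offset, misalignment))

# The sample statistics estimate the overall system performance spread.
print(round(statistics.mean(samples), 2))     # ≈ 2.0 (nominal response)
print(round(statistics.stdev(samples), 3))    # combined effect of tolerances
```

Because every parameter is sampled from its full distribution rather than a worst-case bound, the resulting estimate is unbiased in the sense the abstract describes.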

    Analysis of measurement and simulation errors in structural system identification by observability techniques

    This is the peer reviewed version of the following article: [Lei, J., Lozano-Galant, J. A., Nogal, M., Xu, D., and Turmo, J. (2017) Analysis of measurement and simulation errors in structural system identification by observability techniques. Struct. Control Health Monit., 24. doi: 10.1002/stc.1923], which has been published in final form at http://onlinelibrary.wiley.com/wol1/doi/10.1002/stc.1923/full. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
    During the process of structural system identification, errors are unavoidable. This paper analyzes the effects of measurement and simulation errors in structural system identification based on observability techniques. To illustrate the symbolic approach of this method, a simply supported beam is analyzed step by step. This analysis provides, for the first time in the literature, the parametric equations of the estimated parameters. The effects of several factors, such as errors in a particular measurement or in the whole measurement set, load location, measurement location, or the sign of the errors, on the accuracy of the identification results are also investigated. It is found that an error in a particular measurement increases the errors of individual estimations, and that this effect can be significantly mitigated by introducing random errors in the whole measurement set. The propagation of simulation errors when using observability techniques is illustrated by two structures with different measurement sets and loading cases. A fluctuation of the observed parameters around the real values is shown to be a characteristic of this method. It is also suggested that a sufficient combination of different load cases should be utilized to avoid inaccurate estimation at the location of low-curvature zones.
    Peer Reviewed. Postprint (author's final draft).

    Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared, and the limitations and advantages of the coupling techniques are discussed.
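The coupling step described above, turning a discrete ordinates result into Monte Carlo source data, can be sketched with inverse-CDF sampling. This is a simplified illustration, not the NERVA codes: the angular flux values and bin edges are invented, and a real coupling would also sample spatial position and energy.

```python
import bisect
import random

random.seed(1)

# Per-bin angular flux on a coupling surface, as a discrete ordinates
# calculation might tally it (hypothetical values).
angular_flux = [0.1, 0.4, 0.3, 0.2]
bin_edges = [0.0, 0.25, 0.5, 0.75, 1.0]   # direction-cosine bin boundaries

# Build the cumulative distribution over angular bins.
total = sum(angular_flux)
cdf = []
acc = 0.0
for f in angular_flux:
    acc += f / total
    cdf.append(acc)

def sample_direction():
    """Pick an angular bin by inverse-CDF sampling, then a cosine within it."""
    i = bisect.bisect_left(cdf, random.random())
    return random.uniform(bin_edges[i], bin_edges[i + 1])

# Monte Carlo source particles drawn from the discrete ordinates result.
directions = [sample_direction() for _ in range(50_000)]
frac = sum(0.25 <= d < 0.5 for d in directions) / len(directions)
print(round(frac, 2))   # fraction in the second bin approaches 0.4
```

The "techniques for producing source detail" the abstract mentions correspond to choices such as how finely to bin, and how to interpolate within a bin, when constructing this sampler.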

    Model-based dependability analysis : state-of-the-art, challenges and future outlook

    Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques in order to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models from component failure models compositionally, using the topology of the system; the other utilizes design models, typically state automata, to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms and provides insight into their working mechanisms, applicability, strengths, and challenges, as well as recent developments within these fields. We also discuss emerging trends in integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.

    Information extraction from multimedia web documents: an open-source platform and testbed

    The LivingKnowledge project aimed to enhance the current state of the art in search, retrieval, and knowledge management on the web by advancing the use of sentiment and opinion analysis within multimedia applications. To achieve this aim, a diverse set of novel and complementary analysis techniques has been integrated into a single, extensible software platform on which such applications can be built. The platform combines state-of-the-art techniques for extracting facts, opinions, and sentiment from multimedia documents and, unlike earlier platforms, exploits both visual and textual techniques to support multimedia information retrieval. Foreseeing the usefulness of this software to the wider community, the platform has been made generally available as an open-source project. This paper describes the platform design, gives an overview of the analysis algorithms integrated into the system, and describes two applications that use the system for multimedia information retrieval.

    Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study

    Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is often considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically "learn" models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions to be answered. For instance, how much should we generalize from the observed samples, and how fast would learning converge? Or, would the analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.
    Comment: 15 pages, plus 2 reference pages. Accepted by FASE 2017 in ETAPS.
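The simplest instance of the model learning discussed above is estimating a discrete-time Markov chain from sample executions by frequency counting. The sketch below uses invented trace data (state names `init`, `req`, etc. are hypothetical) and is far cruder than the algorithms the paper studies, which also control generalization; it only shows the basic learn-from-traces idea.

```python
from collections import Counter, defaultdict

# Hypothetical sample executions of a system, each a sequence of states.
traces = [
    ["init", "req", "ok", "done"],
    ["init", "req", "err", "req", "ok", "done"],
    ["init", "req", "ok", "done"],
]

# Count observed transitions between consecutive states.
counts = defaultdict(Counter)
for trace in traces:
    for s, t in zip(trace, trace[1:]):
        counts[s][t] += 1

# Transition probabilities are relative frequencies of observed successors.
model = {
    s: {t: n / sum(succ.values()) for t, n in succ.items()}
    for s, succ in counts.items()
}

print(model["req"])   # {'ok': 0.75, 'err': 0.25}
```

A probabilistic model checker could then query properties of `model`, e.g. the probability of eventually reaching `done`; how far such frequency estimates should be generalized beyond the observed samples is exactly the question the paper's evolution-based approach addresses.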