2,573 research outputs found

    Mean Square Capacity of Power Constrained Fading Channels with Causal Encoders and Decoders

    Full text link
    This paper is concerned with the mean square stabilization problem of discrete-time LTI systems over a power constrained fading channel. Unlike existing works, the channel considered in this paper suffers from both fading and additive noise. We allow any form of causal channel encoders/decoders, rather than the linear encoders/decoders commonly studied in the literature. Sufficient conditions and necessary conditions for mean square stabilizability are given in terms of channel parameters, such as transmission power and fading and additive noise statistics, in relation to the unstable eigenvalues of the open-loop system matrix. The corresponding mean square capacity of the power constrained fading channel under causal encoders/decoders is given. It is proved that this mean square capacity is smaller than the corresponding Shannon channel capacity. In the end, numerical examples are presented, which demonstrate that causal encoders/decoders render less restrictive stabilizability conditions than the linear encoders/decoders studied in existing works. Comment: Accepted by the 54th IEEE Conference on Decision and Control
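
    As a hedged orientation only (not the paper's exact fading-channel result), the stabilizability conditions in this literature take the form of an eigenvalue-versus-capacity inequality; for the classical power constrained AWGN channel the benchmark reads

        \[
            \sum_{i:\,|\lambda_i|\ge 1} \log_2 |\lambda_i| \;<\; C,
            \qquad
            C_{\mathrm{AWGN}} = \tfrac{1}{2}\log_2\!\Bigl(1 + \tfrac{P}{\sigma^2}\Bigr),
        \]

    where the sum runs over the unstable open-loop eigenvalues, P is the transmission power budget and \sigma^2 the additive noise variance. In the paper's setting, the mean square capacity of the power constrained fading channel under causal encoders/decoders plays the role of C and is shown to lie below the Shannon capacity.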

    Multi Stage based Time Series Analysis of User Activity on Touch Sensitive Surfaces in Highly Noise Susceptible Environments

    Full text link
    This article proposes a multi-stage framework for time series analysis of user activity on touch sensitive surfaces in noisy environments. Multiple methods are combined in the multi-stage framework, including moving average, moving median, linear regression, kernel density estimation, partial differential equations and Kalman filtering. The proposed three-stage filter, consisting of partial differential equation based denoising, a Kalman filter and a moving average, provides ~25% better noise reduction than the other methods according to the Mean Squared Error (MSE) criterion in highly noise susceptible environments. Apart from synthetic data, we also obtained real-world data such as handwriting and finger/stylus drags on touch screens in the presence of high noise, such as unauthorized charger noise or display noise, and validated our algorithms. Furthermore, the proposed algorithm performs qualitatively better than existing solutions for the touch panels of high-end handheld devices available in the consumer electronics market. Comment: 9 pages (including 9 figures and 3 tables); International Journal of Computer Applications (published)
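
    The abstract names the three stages but not their wiring; a minimal sketch of such a pipeline (explicit diffusion-style PDE smoothing, a scalar random-walk Kalman filter, then a moving average), with all window sizes, noise variances and the toy trajectory being illustrative assumptions rather than the paper's settings, could look like this:

        import numpy as np

        def pde_denoise(x, iters=20, lam=0.25):
            """Stage 1: explicit heat-equation (diffusion) smoothing of a 1-D signal."""
            y = np.asarray(x, dtype=float).copy()
            for _ in range(iters):
                y[1:-1] += lam * (y[2:] - 2.0 * y[1:-1] + y[:-2])
            return y

        def kalman_1d(z, q=1e-4, r=1e-1):
            """Stage 2: scalar Kalman filter with a random-walk state model."""
            xhat, p, out = z[0], 1.0, np.empty_like(z, dtype=float)
            for i, zi in enumerate(z):
                p = p + q                      # predict
                k = p / (p + r)                # Kalman gain
                xhat = xhat + k * (zi - xhat)  # update with the new sample
                p = (1.0 - k) * p
                out[i] = xhat
            return out

        def moving_average(x, w=5):
            """Stage 3: centered moving average."""
            return np.convolve(x, np.ones(w) / w, mode="same")

        def three_stage_filter(z):
            return moving_average(kalman_1d(pde_denoise(z)))

        # Toy MSE check against a known clean trajectory (illustrative only).
        t = np.linspace(0.0, 1.0, 500)
        clean = np.sin(2.0 * np.pi * 2.0 * t)
        noisy = clean + 0.3 * np.random.randn(t.size)
        print("MSE:", np.mean((three_stage_filter(noisy) - clean) ** 2))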

    Testing the equal-time angular-averaged consistency relation of the gravitational dynamics in N-body simulations

    Full text link
    We explicitly test the equal-time consistency relation between the angular-averaged bispectrum and the power spectrum of the matter density field, employing a large suite of cosmological $N$-body simulations. This is the lowest-order version of the relations between $(\ell+n)$-point and $n$-point polyspectra, where one averages over the angles of $\ell$ soft modes. This relation depends on two wave numbers, $k'$ in the soft domain and $k$ in the hard domain. We show that it holds up to a good accuracy, when $k'/k \ll 1$ and $k'$ is in the linear regime, while the hard mode $k$ goes from linear ($0.1\,h\,\mathrm{Mpc}^{-1}$) to nonlinear ($1.0\,h\,\mathrm{Mpc}^{-1}$) scales. On scales $k \lesssim 0.4\,h\,\mathrm{Mpc}^{-1}$, we confirm the relation within the statistical error of the simulations (typically a few percent depending on the wave number), even though the bispectrum can already deviate from leading-order perturbation theory by more than $30\%$. We further examine the relation on smaller scales with higher-resolution simulations. We find that the relation holds within the statistical error of the simulations at $z=1$, whereas we find deviations as large as $\sim 7\%$ at $k \sim 1.0\,h\,\mathrm{Mpc}^{-1}$ at $z=0.35$. We show that this can be explained partly by the breakdown of the approximation $\Omega_\mathrm{m}/f^2 \simeq 1$ with supplemental simulations done in the Einstein-de Sitter background cosmology. We also estimate the impact of this approximation on the power spectrum and bispectrum. Comment: 14 pages, 15 figures, added Sec. III E and Appendixes, matched to PRD published version
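
    The measurement pipeline is not described in the abstract; as a hedged sketch of the standard FFT-based shell estimator one could use to form the tested ratio, angle-averaged bispectrum over P(k')P(k), from a gridded density contrast (grid size, box size, bin edges and the placeholder field below are illustrative assumptions, not the authors' setup):

        import numpy as np

        def k_grid(n, box):
            """|k| on an n^3 FFT grid for a periodic box of side `box` (Mpc/h)."""
            k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
            kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
            return np.sqrt(kx**2 + ky**2 + kz**2)

        def shell_fields(delta_k, kmag, kmin, kmax):
            """Configuration-space fields carrying only modes of one spherical k-shell."""
            mask = ((kmag >= kmin) & (kmag < kmax)).astype(float)
            d = np.fft.ifftn(delta_k * mask).real   # data-weighted shell
            w = np.fft.ifftn(mask).real             # mode/triangle counting field
            return d, w

        def power(delta_k, kmag, kmin, kmax, vol):
            """Shell-averaged power spectrum, normalization P = <|delta_k|^2> / V."""
            m = (kmag >= kmin) & (kmag < kmax)
            return np.mean(np.abs(delta_k[m]) ** 2) / vol

        def bispectrum(delta_k, kmag, shells, vol):
            """FFT (Scoccimarro-type) estimator for one (k1, k2, k3) shell triplet."""
            ds, ws = zip(*(shell_fields(delta_k, kmag, lo, hi) for lo, hi in shells))
            return np.sum(ds[0] * ds[1] * ds[2]) / np.sum(ws[0] * ws[1] * ws[2]) / vol

        # Illustrative usage; `delta` stands in for a gridded simulation snapshot.
        n, box = 128, 500.0                        # grid cells per side, box in Mpc/h
        vol, cell = box**3, (box / n) ** 3
        delta = np.random.randn(n, n, n)           # placeholder, not simulation data
        kmag = k_grid(n, box)
        delta_k = np.fft.fftn(delta) * cell        # approximate continuum convention

        ksoft, khard = (0.012, 0.020), (0.38, 0.42)   # soft k' and hard k shells (h/Mpc)
        B = bispectrum(delta_k, kmag, [ksoft, khard, khard], vol)
        R = B / (power(delta_k, kmag, *ksoft, vol) * power(delta_k, kmag, *khard, vol))
        print("angle-averaged B(k', k, k) / [P(k') P(k)] =", R)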

    Integration of Time-lapse Seismic Data Using the Onset Time Approach: the Impact of Seismic Survey Frequency

    Get PDF
    The seismic inversion method based on seismic onset times has shown great promise for integrating frequent seismic surveys to update geologic models. However, due to the high cost of seismic surveys, frequent surveys are not commonly available. In this study, we focus on analyzing the impact of seismic survey frequency on the onset time approach, aiming to extend its advantages to cases where only infrequent seismic surveys are available. To analyze this impact, we first conduct a sensitivity analysis based on the frequent seismic survey data (over 175 surveys) of steam injection in a heavy oil reservoir (Peace River Unit) in Canada. The onset time maps calculated from seismic survey data sampled at various intervals from the frequent data sets are compared to examine the need for, and effectiveness of, interpolation between surveys. Additionally, we compare the onset time inversion with traditional seismic amplitude inversion and quantitatively investigate the nonlinearity and robustness of these two inversion methods. The sensitivity analysis shows that, by using interpolation between seismic surveys to calculate the onset time, an adequate onset time map can be extracted from infrequent seismic surveys. This holds as long as there are no changes in the underlying physical mechanisms during the interpolation period. It is concluded that linear interpolation is more efficient and robust than Lagrange interpolation. A 2D waterflooding case demonstrates the necessity of interpolation for resolving the large time span between seismic surveys and obtaining a more accurate model update and more efficient misfit reduction. The Brugge benchmark case shows that the onset time inversion method obtains a permeability update comparable to that of the traditional seismic amplitude inversion method while being much more efficient. This results from the significant data reduction achieved by integrating a single onset time map rather than multiple sets of amplitude maps. The onset time approach also achieves superior convergence performance, resulting from its quasi-linear properties. It is found that the nonlinearity of the onset time method is smaller than that of the amplitude inversion method by several orders of magnitude.
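
    The interpolation step examined in the study can be sketched generically: for each map pixel, the time-lapse attribute change is linearly interpolated between the available survey dates and the onset time is taken as the first crossing of a detection threshold. The function, array names and threshold below are illustrative assumptions, not the study's actual workflow:

        import numpy as np

        def onset_time_map(survey_times, attribute_maps, threshold):
            """Estimate a per-pixel onset time from infrequent time-lapse surveys.

            survey_times   : (T,) acquisition times, increasing
            attribute_maps : (T, ny, nx) time-lapse change attribute per survey
            threshold      : detection level whose first crossing defines the onset
            Returns an (ny, nx) map; pixels that never cross are left as NaN.
            """
            t = np.asarray(survey_times, dtype=float)
            a = np.asarray(attribute_maps, dtype=float)
            onset = np.full(a.shape[1:], np.nan)
            onset[a[0] >= threshold] = t[0]          # already above at the baseline survey

            for i in range(1, len(t)):
                below = a[i - 1] < threshold
                above = a[i] >= threshold
                new = below & above & np.isnan(onset)    # first crossing in this interval
                # Linear interpolation of the crossing time between the two surveys.
                frac = (threshold - a[i - 1][new]) / (a[i][new] - a[i - 1][new])
                onset[new] = t[i - 1] + frac * (t[i] - t[i - 1])
            return onset

        # Illustrative use with three sparse surveys of a 100 x 100 attribute map.
        times = np.array([0.0, 180.0, 540.0])            # days
        maps = np.random.rand(3, 100, 100) * np.array([0.1, 0.6, 1.2])[:, None, None]
        print(onset_time_map(times, maps, threshold=0.5))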

    Integrating and Ranking Uncertain Scientific Data

    Get PDF
    Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions, though it does not improve prediction of well-known functions. This suggests that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings. This suggests that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently. This suggests that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
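
    BioRank's actual probabilistic formalism is not given in the abstract; purely as a hedged illustration of ranking function predictions by combining per-path probabilities (the noisy-OR combination and independence assumption here are our own simplification, and all identifiers are made up), one might write:

        from collections import defaultdict

        def rank_predictions(evidence_paths):
            """Rank (protein, function) predictions from uncertain integration paths.

            evidence_paths: iterable of (protein, function, [edge_prob, ...]) where each
            list holds the per-link confidences along one join path across sources.
            Assumes path independence and combines paths noisy-OR style (a simplification).
            """
            combined = defaultdict(lambda: 1.0)        # product of (1 - path probability)
            for protein, function, edge_probs in evidence_paths:
                path_p = 1.0
                for p in edge_probs:
                    path_p *= p                        # links along one path are conjunctive
                combined[(protein, function)] *= (1.0 - path_p)

            scores = {key: 1.0 - miss for key, miss in combined.items()}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Illustrative evidence: two paths support P1/GO:0005634, one supports P2/GO:0003677.
        paths = [
            ("P1", "GO:0005634", [0.9, 0.7]),
            ("P1", "GO:0005634", [0.6, 0.5]),
            ("P2", "GO:0003677", [0.8, 0.8]),
        ]
        for (protein, function), score in rank_predictions(paths):
            print(protein, function, round(score, 3))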

    Biomechanical Modeling for Lung Tumor Motion Prediction during Brachytherapy and Radiotherapy

    Get PDF
    A novel technique is proposed to develop a biomechanical model for estimating a lung tumor's position as a function of respiration cycle time. Continuous tumor motion is a major challenge in lung cancer treatment techniques where the tumor needs to be targeted, e.g. in external beam radiotherapy and brachytherapy. If not accounted for, this motion leads to areas of radiation over- and/or under-dosage for normal tissue and tumors. In this thesis, biomechanical models were developed for lung tumor motion prediction in two distinct cases: lung brachytherapy and lung external beam radiotherapy. The geometries of the lung and other relevant surrounding organs, along with the loading, boundary conditions and mechanical properties, were incorporated appropriately for each case. While using a material model with constant incompressibility is sufficient to model the lung tissue in the brachytherapy case, in external beam radiation therapy the tissue incompressibility varies significantly due to normal breathing. One of the main issues tackled in this research is characterizing lung tissue incompressibility variations and measuring the corresponding parameters as a function of respiration cycle time. Results obtained from an ex-vivo deflated porcine lung indicated the feasibility and reliability of using the developed biomechanical model to predict tumor motion during brachytherapy. For external beam radiotherapy, in-silico studies indicated a very significant impact of lung tissue incompressibility on the accuracy of predicting tumor motion. Furthermore, ex-vivo porcine lung experiments demonstrated the capability and reliability of the proposed approach for predicting tumor motion as a function of cycle time. As such, the proposed models have good potential to be incorporated effectively in computer assisted lung radiotherapy treatment systems.

    Opening Autonomous Airspace–a Prologue

    Get PDF
    The proliferation of Unmanned Aerial Vehicles (UAV), and in particular small Unmanned Aerial Systems (sUAS), has significant operational implications for the Air Traffic Control (ATC) system of the future. Integrating unmanned aircraft safely presents long-standing challenges, especially during the lengthy transition period when unmanned vehicles will be mixed with piloted vehicles. Integration of dissimilar systems is not an easy, straightforward task, and in this case it is complicated by the difficulty of knowing with certainty what is present in the airspace. Additionally, there are significant technology, security and liability issues that will need resolution to ensure property and life are protected and, in the event of loss, indemnified. The future of air traffic will be a fully networked environment, where the absence of participation on the network could connote a potential intruder and threat. This article explores a potential airspace structure and a conceptual air traffic management philosophy of self-separation that is inclusive of all participants. Additionally, the article acknowledges the significant cyber security, technological, societal trust, employment, policy, and liability implications of the transition to a fully autonomous air transportation system. Each subject is described at a macro, operations-analysis level versus a more detailed systems engineering level. The objective and potential value of such a treatment is to encourage industry dialog about possibilities and, more importantly, a focus toward workable future air traffic solutions.

    A Novel Optimization towards Higher Reliability in Predictive Modelling towards Code Reusability

    Get PDF
    Although the area of software engineering has made remarkable progress in the last decade, there has been less attention to the concept of code reusability in this regard. Code reusability is a subset of software reusability, which is one of the signature topics in software engineering. We review existing systems and find that no standard research approach toward code reusability has been introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing the performance of code reusability. For this purpose, we introduce a case study of a near real-time challenge and incorporate it into our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time.
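
    The paper's model is not reproduced in the abstract, but the damped least-squares update it refers to can be sketched generically: repeatedly solve (J^T J + lambda I) step = J^T r and adapt the damping according to whether the step reduces the residual. The toy curve fit and all settings below are illustrative assumptions, not the paper's reliability predictor:

        import numpy as np

        def damped_least_squares(residual_fn, jacobian_fn, params, iters=50, lam=1e-2):
            """Generic damped least-squares (Levenberg-Marquardt style) optimizer."""
            p = np.asarray(params, dtype=float)
            for _ in range(iters):
                r = residual_fn(p)
                J = jacobian_fn(p)
                step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
                p_new = p - step
                if np.sum(residual_fn(p_new) ** 2) < np.sum(r ** 2):
                    p, lam = p_new, lam * 0.5    # accept the step, reduce damping
                else:
                    lam *= 2.0                   # reject the step, increase damping
            return p

        # Toy reliability-style curve fit: y = a * (1 - exp(-b * t)) (illustrative only).
        t = np.linspace(0.0, 10.0, 40)
        y = 0.95 * (1.0 - np.exp(-0.4 * t)) + 0.01 * np.random.randn(t.size)

        def residual(p):
            a, b = p
            return a * (1.0 - np.exp(-b * t)) - y

        def jacobian(p):
            a, b = p
            return np.column_stack([1.0 - np.exp(-b * t), a * t * np.exp(-b * t)])

        print(damped_least_squares(residual, jacobian, [0.5, 0.1]))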