
    PUMA criterion = MODE criterion

    We show that the recently proposed (enhanced) PUMA estimator for array processing minimizes the same criterion function as the well-established MODE estimator. (PUMA = principal-singular-vector utilization for modal analysis; MODE = method of direction estimation.)
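    For context, the shared criterion can be sketched in weighted subspace fitting form. The expression below is a from-memory reconstruction of the classical MODE criterion, not an excerpt from this paper; the notation and the exact weighting may differ from the authors'.

    ```latex
    % Sketch of the MODE criterion (reconstruction, not quoted from the paper).
    % \hat{U}_s : estimated signal subspace, \hat{\Lambda}_s its eigenvalue matrix,
    % \hat{\sigma}^2 : noise-variance estimate, B = B(b) : Toeplitz matrix of the
    % polynomial coefficients b whose roots encode the directions of arrival.
    \[
      \hat{b} \;=\; \arg\min_{b}\;
      \operatorname{tr}\!\left\{ B\left(B^{H}B\right)^{-1}B^{H}\,
      \hat{U}_{s}\,\hat{W}\,\hat{U}_{s}^{H} \right\},
      \qquad
      \hat{W} \;=\; \left(\hat{\Lambda}_{s}-\hat{\sigma}^{2}I\right)^{2}
      \hat{\Lambda}_{s}^{-1}
    \]
    ```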

    A New Tool for CME Arrival Time Prediction using Machine Learning Algorithms: CAT-PUMA

    Coronal mass ejections (CMEs) are arguably the most violent eruptions in the solar system. CMEs can cause severe disturbances in interplanetary space and can even affect human activities in many aspects, causing damage to infrastructure and loss of revenue. Fast and accurate prediction of CME arrival time is therefore vital to minimize the disruption that CMEs may cause when interacting with geospace. In this paper, we propose a new approach for partial-/full halo CME Arrival Time Prediction Using Machine learning Algorithms (CAT-PUMA). Via detailed analysis of CME features and solar-wind parameters, we build a prediction engine that takes advantage of 182 previously observed geo-effective partial-/full halo CMEs and uses Support Vector Machine algorithms. We demonstrate that CAT-PUMA is accurate and fast. In particular, predictions made after applying CAT-PUMA to a test set unknown to the engine show a mean absolute prediction error of ∼5.9 hr in the CME arrival time, with 54% of the predictions having absolute errors of less than 5.9 hr. Comparisons with other models reveal that CAT-PUMA gives more accurate predictions for 77% of the events investigated, and its predictions can be produced very quickly, i.e., within minutes of providing the necessary input parameters of a CME. A practical guide containing the CAT-PUMA engine and the source code of two examples is available in the Appendix, allowing the community to perform their own predictions using CAT-PUMA.
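    As a rough illustration of the pipeline the abstract describes (a feature vector per CME event, Support Vector Machine regression, mean-absolute-error evaluation on a held-out test set), here is a minimal sketch. The features, data, and hyperparameters are placeholders, not CAT-PUMA's actual inputs or settings.

    ```python
    # Sketch of an SVM-regression pipeline in the spirit of CAT-PUMA.
    # Features, data, and hyperparameters are placeholders, not the
    # paper's actual inputs or settings.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Placeholder table: 182 events x 5 features (e.g. CME speed, angular
    # width, solar-wind speed, ...); target = transit time in hours.
    X = rng.normal(size=(182, 5))
    y = 40.0 + 10.0 * rng.normal(size=182)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print("mean absolute error (hr):", np.mean(np.abs(pred - y_test)))
    ```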

    Partial Relaxation Approach: An Eigenvalue-Based DOA Estimator Framework

    In this paper, the partial relaxation approach is introduced and applied to DOA estimation using spectral search. Unlike existing methods such as Capon or MUSIC, which can be considered single-source approximations of multi-source estimation criteria, the proposed approach accounts for the existence of multiple sources. At each considered direction, the manifold structure of the remaining interfering signals impinging on the sensor array is relaxed, which results in closed-form estimates for the interference parameters. Thanks to this relaxation, the conventional multidimensional optimization problem reduces to a simple spectral search. Following this principle, we propose estimators based on the Deterministic Maximum Likelihood, Weighted Subspace Fitting, and covariance fitting methods. To calculate the pseudo-spectra efficiently, an iterative rooting scheme based on rational function approximation is applied to the partial relaxation methods. Simulation results show that the performance of the proposed estimators is superior to that of conventional methods, especially at low Signal-to-Noise Ratio and a low number of snapshots, irrespective of any specific structure of the sensor array, while maintaining a computational cost comparable to that of MUSIC.
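    The partial-relaxation pseudo-spectra themselves are paper-specific, but the spectral-search framework they plug into is the one MUSIC uses. The sketch below implements plain MUSIC for a uniform linear array as a reference point; the array geometry, noise level, and search grid are arbitrary choices for illustration.

    ```python
    # Sketch of the spectral-search framework (plain MUSIC, not the
    # partial-relaxation estimators) for a uniform linear array.
    import numpy as np

    def steering(theta, m, d=0.5):
        """ULA steering vector; element spacing d in wavelengths."""
        return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

    def music_spectrum(R, n_sources, grid):
        """MUSIC pseudo-spectrum from a sample covariance matrix R."""
        m = R.shape[0]
        _, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
        En = eigvec[:, : m - n_sources]     # noise subspace
        spec = []
        for theta in grid:
            a = steering(theta, m)
            spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return np.asarray(spec)

    # Toy usage: two sources at -10 and 20 degrees, 8 sensors, 200 snapshots.
    rng = np.random.default_rng(1)
    m, n = 8, 200
    doas = np.deg2rad([-10.0, 20.0])
    A = np.stack([steering(t, m) for t in doas], axis=1)
    S = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
    X = A @ S + 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
    R = X @ X.conj().T / n

    grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
    spec = music_spectrum(R, 2, grid)
    peaks = np.flatnonzero((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])) + 1
    top = peaks[np.argsort(spec[peaks])[-2:]]
    print("estimated DOAs (deg):", np.round(np.sort(np.rad2deg(grid[top])), 1))
    ```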

    Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate, and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities covering a wide range of issues and metrics related to safety, capacity, and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. The objective of this study is to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed for this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of Steps 1 and 2 were combined to draw conclusions about (1) the best capabilities currently available, (2) the types of concept testing and validation that can be carried out reliably with such existing capabilities, and (3) the currently unavailable modeling capabilities that should receive high priority for near-term research and development. It should be emphasized that the study is concerned only with the class of 'fast-time' analytical and simulation models. 'Real-time' models, which typically involve humans in the loop, comprise another extensive class that is not addressed in this report. However, the relationship between some of the fast-time models reviewed and a few well-known real-time models is identified in several parts of this report, and the potential benefits from the combined use of these two classes of models (a very important subject) are discussed in chapters 4 and 7.

    Prospects for detecting gravitational waves at 5 Hz with ground-based detectors

    We propose an upgrade to Advanced LIGO (aLIGO), named LIGO-LF, that focuses on improving the sensitivity in the 5-30 Hz low-frequency band, and we explore the upgrade's astrophysical applications. We present a comprehensive study of the detector's technical noises and show that with technologies currently under development, such as interferometrically sensed seismometers and balanced-homodyne readout, LIGO-LF can reach the fundamental limits set by quantum and thermal noises down to 5 Hz. These technologies are also directly applicable to the future generation of detectors. We go on to consider this upgrade's implications for the astrophysical output of an aLIGO-like detector. A single LIGO-LF can detect mergers of stellar-mass black holes (BHs) out to a redshift of z~6 and would be sensitive to intermediate-mass black holes up to 2000 M_\odot. The detection rate of merging BHs will increase by a factor of 18 compared to aLIGO. Additionally, for a given source the chirp mass and total mass can be constrained 2 times better, and the effective spin 3-5 times better, than with aLIGO. Furthermore, LIGO-LF enables the localization of coalescing binary neutron stars with an uncertainty solid angle 10 times smaller than that of aLIGO at 30 Hz, and 4 times smaller when the entire signal is used. LIGO-LF also significantly enhances the probability of detecting other astrophysical phenomena, including the tidal excitation of neutron star r-modes and gravitational memory effects.
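    For readers unfamiliar with the quantities being constrained, the standard inspiral relations below give the intuition; these are textbook expressions, not taken from the paper. Lowering the cutoff from 30 Hz to 5 Hz multiplies the in-band cycle count by roughly (30/5)^{5/3} ≈ 20, which is why the mass and spin constraints tighten.

    ```latex
    % Textbook expressions (not quoted from the paper).
    % Chirp mass, the best-measured mass combination for an inspiral:
    \[
      \mathcal{M} \;=\; \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}}
    \]
    % Leading-order number of inspiral cycles above a low-frequency cutoff:
    \[
      N_{\mathrm{cyc}} \;\simeq\; \frac{1}{32\pi^{8/3}}
      \left(\frac{G\mathcal{M}}{c^{3}}\right)^{-5/3} f_{\mathrm{low}}^{-5/3}
    \]
    ```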

    Stabilization and control of teleoperation systems with time delays

    A control scheme for teleoperation systems with time delay is developed based on the concept of passivity. This control method requires neither detailed knowledge of the manipulator systems nor mathematical models of the environments, and it is applicable for any time delay. The main contribution of this method is that it is less conservative than the traditional passivity-based method: the passivity controller operates only when the system loses passivity, whereas in a traditional passivity formulation the controller works at all times during operation and thus adversely affects the performance of the system.

    Using the proposed control scheme, a subsystem is defined that is composed of the communication channel, the slave robot, and the manipulated environment. This subsystem is treated as a one-port network component, and passivity theory is applied to this component to assure stability. The energy flowing into the one-port network, in the form of the control command and the force feedback, is monitored. A passivity regulator is activated to maintain the passivity of the network by modifying the feedback force to the master, thus adjusting the energy exchange between the master and the communication channel.

    When this method is applied, only the information at the interface between the master manipulator and the communication channel is collected and observed; there is no need for accurate or detailed knowledge of the structure or timing of the communication channel. The method can make the system lossless regardless of the feedback force, the coordinating force controlling the slave joint motions, or the contact force. The approach can stabilize the system regardless of the time delay, discontinuities in environmental contact, or discretization of the physical plant. Directly feeding back the environmental contact force poses no problem. The results of this work show that it is advantageous to use the measured environmental force as the feedback, providing superior performance for free motion and more realistic haptic feedback to the operator from the remote environment.

    Simulation and experimental results are presented to verify the proposed control scheme.
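    A minimal sketch of the energy-monitoring idea described above: accumulate the energy flowing into the one-port network at the master/communication interface, and inject damping only when the running total goes negative. The variable names and the damping-injection rule are illustrative assumptions, not the thesis's exact control law or sign conventions.

    ```python
    # Hedged sketch of a time-domain passivity observer/regulator.
    # All names and the correction rule are illustrative, not the
    # thesis's exact formulation.
    def passivity_step(E, v, f_fb, dt, eps=1e-9):
        """Advance the passivity observer by one sample.

        E    : accumulated energy that has flowed into the network (J)
        v    : master velocity at the port (m/s)
        f_fb : feedback force commanded to the master (N)
        dt   : sample period (s)
        Returns (updated energy, possibly modified feedback force).
        """
        E_new = E + f_fb * v * dt      # energy into the one-port network
        if E_new < 0.0:                # passivity lost: regulator activates
            # Inject just enough damping to dissipate the deficit.
            alpha = -E_new / (dt * v * v + eps)
            f_fb = f_fb + alpha * v    # modify the force fed back to the master
            E_new = 0.0                # deficit absorbed by the regulator
        return E_new, f_fb
    ```

    Note that the regulator stays inactive (the `if` branch never fires) as long as the observed energy remains non-negative, which is the less conservative behavior the abstract contrasts with always-on passivity formulations.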

    Synthesizing Population for Travel Activity Analysis

    Population synthesis is a fundamental procedure for individual-based modeling in transportation research. Population synthesis generates anonymized individuals with selected socio-demographic variables whose statistical distributions are similar to those of samples from the real population. Previous studies on population synthesis focused on generating a general-purpose population by fitting the joint distributions of multiple variables to their sampled distributions. In addition to fitting the joint distributions, this study focuses on generating a population for travel activity analysis by considering individuals' travel activity patterns and the associated social, economic, and demographic characteristics. A person's daily movement is a time sequence of activities connected by travel behaviors. It can be described as vectors that include important transportation attributes such as travel distance, travel mode, activity type, activity time, and activity sequence. A multidimensional pattern vector method is used in this study to represent an individual's daily travel activities. This method is based on the combination of time-geography, sequence alignment, and pattern vectors. Using the 2001 and 2009 National Household Travel Survey (NHTS), the travel distances and activity sequences of individuals are normalized, compared, and integrated into a dissimilarity matrix. Major travel activity patterns are then examined by cluster analysis. A random forest model is applied to identify the prominent socio-demographic characteristics that correlate with the activity patterns. The prominent socio-demographic characteristics are then used to synthesize population microdata. Since the algorithmic complexity of population synthesis grows exponentially with the number of attributes, the methodology used in this study can effectively reduce the computational intensity by focusing on the most important variables for travel activity analysis. This study also addresses another issue in traditional population synthesis algorithms, namely that the probability distributions at the individual and household levels cannot be fitted simultaneously. In this study, the Iterative Proportional Fitting (IPF) algorithm is used to consider the distributions at different scales and to generate synthetic population microdata with the prominent socio-demographic characteristics. The performance of the algorithm that generates the synthesized population is evaluated by scatter plots and Normalized Root Mean Square Error (NRMSE) analysis. In addition, the distributions of socio-demographic attributes in the synthesized data are compared with those of the variables in the observed sample dataset. The verification results indicate that the new method produces better population microdata. This dissertation describes how to generate a synthetic population for Milwaukee County, WI, with prominent socio-demographic variables for travel activity analysis. By carefully selecting the prominent socio-demographic factors, the computational intensity of population synthesis is reduced. It is also found that, by aggregating the IPF-generated weights of individuals and applying them at the household level, the overall goodness-of-fit can be kept at a reasonable level and the distributions of socio-demographic factors at the individual and household levels can be fitted simultaneously.
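    The IPF step at the core of the synthesis is well established; a minimal two-attribute sketch follows. Real applications iterate over many attributes and reconcile person- and household-level weights, which this toy version does not attempt; the seed table and marginal totals are hypothetical.

    ```python
    # Minimal sketch of Iterative Proportional Fitting (IPF): scale a seed
    # cross-tabulation until its row/column sums match target marginals.
    import numpy as np

    def ipf(seed, row_targets, col_targets, tol=1e-8, max_iter=1000):
        table = seed.astype(float).copy()
        for _ in range(max_iter):
            table *= (row_targets / table.sum(axis=1))[:, None]  # fit rows
            table *= (col_targets / table.sum(axis=0))[None, :]  # fit columns
            if np.allclose(table.sum(axis=1), row_targets, atol=tol):
                break
        return table

    # Toy example: seed cross-tab of two binary demographic attributes,
    # with marginal totals from hypothetical census counts.
    seed = np.array([[8.0, 2.0],
                     [3.0, 7.0]])
    fitted = ipf(seed,
                 row_targets=np.array([60.0, 40.0]),
                 col_targets=np.array([55.0, 45.0]))
    print(fitted)           # cell counts matching both marginals
    print(fitted.sum())     # 100.0
    ```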