
    Online Learning for Ground Trajectory Prediction

    This paper presents a hybrid-system model to numerically simulate the climbing phase of an aircraft. The model is then used within a trajectory prediction tool. Finally, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm is used to tune five selected parameters and thus improve the accuracy of the model. Incorporated within a trajectory prediction tool, the model can be used to derive the order of magnitude of the prediction error over time, and thus the domain of validity of the trajectory prediction. A first validation experiment examines the errors over time of a one-time trajectory prediction made at take-off, relative to the default values of the theoretical BADA model. This experiment, which assumes complete information, also shows the limits of the model. A second experiment presents an online trajectory prediction, in which the prediction is continuously updated from the current aircraft position. This approach raises several issues, for which improvements to the basic model are proposed; the resulting trajectory prediction tool is statistically significantly more accurate than the default model. Comment: SESAR 2nd Innovation Days (2012).
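    The parameter-tuning step can be illustrated in miniature. The sketch below uses a plain (mu, lambda) evolution strategy rather than the full CMA-ES (which additionally adapts a covariance matrix), and a made-up quadratic error function standing in for the climb-model prediction error; the five parameters and their "true" values are hypothetical.

```python
import random

def simulate_climb_error(params, target=(0.6, 1.2, 0.9, 0.3, 1.5)):
    # Stand-in for the trajectory prediction error: squared distance of
    # the five tuned parameters from hypothetical "true" aircraft values.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolution_strategy(objective, dim=5, pop=20, elite=5, sigma=0.3, gens=200, seed=1):
    # Minimal (mu, lambda) evolution strategy: sample around the current
    # mean, keep the best candidates, recombine, and shrink the step size.
    rng = random.Random(seed)
    mean = [1.0] * dim
    for _ in range(gens):
        cand = [[m + rng.gauss(0, sigma) for m in mean] for _ in range(pop)]
        cand.sort(key=objective)
        best = cand[:elite]
        mean = [sum(c[i] for c in best) / elite for i in range(dim)]
        sigma *= 0.98  # fixed step-size decay instead of CMA-ES adaptation
    return mean

tuned = evolution_strategy(simulate_climb_error)
print(round(simulate_climb_error(tuned), 4))  # error shrinks toward 0
```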

    Verifying Real-Time Systems using Explicit-time Description Methods

    Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions, and tools based on them, have been presented. Explicit-Time Description Methods, on the other hand, aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method that uses a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates the use of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with its better modularity, is comparable to Lamport's method in time and memory efficiency.
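    The rendezvous idea can be sketched with coroutines: the Tick process synchronizes with each system process once per time unit, so no global clock variable is shared. This is an illustrative Python analogue, not the DVE/DIVINE encoding used in the paper; the process names and deadlines are invented.

```python
def worker(name, deadline, log):
    # A system process: each `yield` is a rendezvous with the Tick
    # process, which hands over the current time. No global clock is read.
    t = 0
    while t < deadline:
        t = yield          # block until the next rendezvous
    log.append((name, t))

def tick_scheduler(procs, max_time):
    # The Tick process: advances time and rendezvouses with every system
    # process exactly once per tick.
    for p in procs:
        next(p)            # advance each generator to its first rendezvous
    for now in range(1, max_time + 1):
        for p in list(procs):
            try:
                p.send(now)
            except StopIteration:
                procs.remove(p)   # process finished at this tick

log = []
procs = [worker("A", 3, log), worker("B", 5, log)]
tick_scheduler(procs, 10)
print(log)  # [('A', 3), ('B', 5)]
```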

    Evaluating search and matching models using experimental data

    This paper introduces an innovative test of search and matching models using the exogenous variation available in experimental data. We take an off-the-shelf Pissarides matching model and calibrate it to data on the control group from a randomized social experiment. We then simulate a program group from a randomized experiment within the model. As a measure of the performance of the model, we compare the outcomes of the program groups from the model and from the randomized experiment. We illustrate our methodology using the Canadian Self-Sufficiency Project (SSP), a social experiment providing a time-limited earnings supplement for Income Assistance recipients who obtain full-time employment within a 12-month period. We find that two features of the model are consistent with the experimental results: endogenous search intensity and exogenous job destruction. We find mixed evidence in support of the assumption of fixed hours of labor supply. Finally, we find that a constant job destruction rate is not consistent with the experimental data in this context.
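    The calibration logic rests on the model's core objects: a Cobb-Douglas matching function and the steady-state (Beveridge-curve) unemployment rate it implies. All numbers below (matching efficiency, elasticity, separation rate, tightness) are illustrative, not the paper's SSP calibration, and the earnings supplement is crudely proxied by a higher effective matching efficiency for the program group.

```python
def job_finding_rate(theta, A=0.5, eta=0.5):
    # Cobb-Douglas matching m = A * u**eta * v**(1-eta); the worker's
    # job-finding rate is m/u = A * theta**(1-eta), with tightness theta = v/u.
    return A * theta ** (1 - eta)

def steady_state_unemployment(theta, delta=0.03, A=0.5, eta=0.5):
    # Steady state: inflows delta*(1-u) equal outflows f(theta)*u.
    f = job_finding_rate(theta, A, eta)
    return delta / (delta + f)

base = steady_state_unemployment(theta=0.7)
# A supplement that raises search intensity acts like a higher effective
# matching efficiency A for the program group (hypothetical value).
program = steady_state_unemployment(theta=0.7, A=0.65)
print(round(base, 3), round(program, 3))
```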

    Using nonparametrics to specify a model to measure the value of travel time

    Using a range of nonparametric methods, the paper examines the specification of a model to evaluate the willingness-to-pay (WTP) for travel time changes from binomial choice data generated by a simple time-cost trading experiment. The analysis favours a model with random WTP as the only source of randomness over a model with fixed WTP that is linear in time and cost and has an additive random error term. Results further indicate that the distribution of log WTP can be described as the sum of a linear index fixing the location of the log WTP distribution and an independent random variable representing unobserved heterogeneity. This formulation is useful for parametric modelling. The index indicates that WTP varies systematically with income and other individual characteristics. WTP also varies with the time difference presented in the experiment, which contradicts standard utility theory. Keywords: willingness-to-pay; WTP; value of time; nonparametric; semiparametric; local logit.
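    A minimal sketch of the random-WTP idea, assuming a lognormal WTP: each respondent accepts a time-cost trade iff her WTP exceeds the implicit price, and a local (Nadaraya-Watson, uniform-kernel) estimate of the acceptance probability against log price recovers the WTP distribution nonparametrically. The simulated data and bandwidth are invented, and this is a local constant fit rather than the paper's local logit.

```python
import math, random

def simulate_choices(n, mu=2.0, sigma=0.6, seed=0):
    # Random-WTP model: a respondent accepts paying dc to save dt minutes
    # iff her (lognormal) WTP exceeds the implicit price dc/dt.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        price = math.exp(rng.uniform(0.5, 3.5))   # offered dc/dt
        wtp = math.exp(rng.gauss(mu, sigma))
        data.append((math.log(price), 1 if wtp > price else 0))
    return data

def local_accept_rate(data, x, h=0.3):
    # Uniform-kernel estimate of P(accept | log price = x); the log price
    # at which this crosses 0.5 estimates the median of log WTP.
    pts = [y for (lp, y) in data if abs(lp - x) <= h]
    return sum(pts) / len(pts) if pts else float("nan")

data = simulate_choices(20000)
print(round(local_accept_rate(data, 2.0), 2))  # near 0.5 when x equals mu
```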

    Applying Deep Bidirectional LSTM and Mixture Density Network for Basketball Trajectory Prediction

    Data analytics helps basketball teams to create tactics. However, manual data collection and analysis are costly and ineffective. We therefore apply a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory from real data, but can also generate new trajectory samples, making it a useful application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for time series problems. A BLSTM receives forward and backward information at the same time, and stacking multiple BLSTMs further increases the learning ability of the model. Combined with the BLSTMs, an MDN is used to generate a multi-modal distribution of outputs; the proposed model can therefore, in principle, represent arbitrary conditional probability distributions of the output variables. We tested our model in two experiments on three-pointer datasets built from NBA SportVU data. In the hit-or-miss classification experiment, the proposed model outperformed other models in both convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
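    The MDN half of the architecture is easy to state on its own: the network outputs mixture weights, means, and standard deviations; the training loss is the mixture's negative log-likelihood; and sampling the mixture yields new trajectories. The sketch below uses fixed, hand-picked 1-D mixture parameters in place of BLSTM outputs.

```python
import math, random

def mdn_sample(pis, mus, sigmas, rng):
    # Draw from a 1-D Gaussian mixture: pick a component by its mixing
    # weight, then sample that component. In the paper, these parameters
    # would come from the BLSTM at each time step.
    k = rng.choices(range(len(pis)), weights=pis)[0]
    return rng.gauss(mus[k], sigmas[k])

def mdn_nll(x, pis, mus, sigmas):
    # Negative log-likelihood of x under the mixture: the loss an MDN
    # minimises during training.
    dens = sum(
        p * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for p, m, s in zip(pis, mus, sigmas)
    )
    return -math.log(dens)

rng = random.Random(0)
pis, mus, sigmas = [0.3, 0.7], [0.0, 3.0], [0.5, 0.8]
draws = [mdn_sample(pis, mus, sigmas, rng) for _ in range(10000)]
mean = sum(draws) / len(draws)
print(round(mean, 1))  # close to 0.3*0 + 0.7*3 = 2.1
```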

    Trend-resistant and cost-efficient cross-over designs for mixed models.

    A mixed model approach is used to construct optimal cross-over designs. In a cross-over experiment the same subject is tested at different points in time. Consider as an example an experiment to investigate the influence of physical attributes of the work environment, such as luminance, ambient temperature and relative humidity, on human performance of acceptance inspection in quality assurance. In a mixed model context, the subject effects are assumed to be independent and normally distributed. Besides inducing correlated observations within the same inspector, the mixed model approach also enables one to specify the covariance structure of the inspection data. Here, several covariance structures are considered, either depending on the time variable or not. Unfortunately, a serious drawback of the inspection experiment is that the results may be influenced by an unknown time trend caused by inspector fatigue due to the monotony of the inspection task. In other circumstances, time trend effects can be caused by learning effects of the test subjects in the behavioural and life sciences, heating or aging of material in prototype experiments, etc. An algorithm is presented to construct cross-over designs that are optimally balanced for time trend effects. The costs of using the subjects and of altering the factor levels between consecutive observations can also be taken into account. A number of examples illustrate the utility of the outlined design methodology.
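    Trend resistance has a simple arithmetic core: against a linear time trend, a design is balanced when every treatment's summed period index is the same, so treatment contrasts are orthogonal to the trend. A toy check on a hypothetical 3-subject, 3-period design (the paper's algorithm additionally handles costs and richer covariance structures):

```python
def linear_trend_scores(design):
    # design: rows = subjects, columns = periods; entries = treatments.
    # Against a linear time trend, balance requires every treatment's
    # summed period index to be equal across the whole design.
    totals = {}
    for row in design:
        for period, trt in enumerate(row, start=1):
            totals[trt] = totals.get(trt, 0) + period
    return totals

# A 3x3 Latin square: each treatment appears once in every period.
design = [
    ["A", "B", "C"],
    ["C", "A", "B"],
    ["B", "C", "A"],
]
print(linear_trend_scores(design))  # {'A': 6, 'B': 6, 'C': 6} -> trend-balanced
```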

    Shear-driven size segregation of granular materials: modeling and experiment

    Granular materials segregate by size under shear, and the ability to quantitatively predict the time required to achieve complete segregation is a key test of our understanding of the segregation process. In this paper, we apply the Gray-Thornton model of segregation (developed for linear shear profiles) to a granular flow with an exponential profile, and evaluate its ability to describe the observed segregation dynamics. Our experiment is conducted in an annular Couette cell with a moving lower boundary. The granular material is initially prepared in an unstable configuration, with a layer of small particles above a layer of large particles. Under shear, the sample mixes and then re-segregates so that the large particles are located in the top half of the system in the final state. During this segregation process, we measure the velocity profile and use the resulting exponential fit as input parameters to the model. To make a direct comparison between the continuum model and the observed segregation dynamics, we locally map the measured height of the experimental sample (which indicates the degree of segregation) to the local packing density. We observe that the model successfully captures the presence of a fast mixing process and a relatively slower re-segregation process, but the model predicts a finite re-segregation time, while in the experiment re-segregation occurs only exponentially in time.
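    The qualitative dynamics (fast transfer while the layers are mixed, slow completion afterwards) can be reproduced with a crude kinetic-sieving cartoon in the spirit of the Gray-Thornton picture: small particles percolate downward through large ones at a rate proportional to phi*(1-phi) across each cell interface. The grid size, rate, and time step below are arbitrary; this is not the paper's continuum model with the measured exponential velocity profile.

```python
def segregate(phi, q=1.0, dt=0.1, steps=2000):
    # phi[i] = small-particle concentration in cell i (index 0 = bottom).
    # flux[i] = downward flow from cell i into cell i-1, proportional to
    # small particles above times large particles below.
    phi = phi[:]
    n = len(phi)
    for _ in range(steps):
        flux = [0.0] * (n + 1)
        for i in range(1, n):
            flux[i] = q * phi[i] * (1.0 - phi[i - 1])
        for i in range(n):
            phi[i] += dt * (flux[i + 1] - flux[i])  # conservative update
    return phi

# Unstable initial state: small particles (phi = 1) layered on top.
state = segregate([0.0, 0.0, 1.0, 1.0])
print([round(p, 2) for p in state])  # small particles end up at the bottom
```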

    D'yakonov-Perel' spin relaxation in InSb/AlInSb quantum wells

    We investigate theoretically the D'yakonov-Perel' spin relaxation time by solving the eight-band Kane model and the Poisson equation self-consistently. Our results show behavior distinct from the single-band model, due to the anomalous spin-orbit interactions in narrow band-gap semiconductors, and agree well with the values reported in a recent experiment (K. L. Litvinenko et al., New J. Phys. 8, 49 (2006)). We find that a strong resonant enhancement of the spin relaxation time appears for spins aligned along [11̄0] at a certain electron density at 4 K. This resonant peak is smeared out with increasing temperature. Comment: 4 pages, 4 figures.
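    The single-band intuition that the paper refines is the motional-narrowing estimate 1/tau_s ≈ ⟨Omega²⟩ tau_p: the more often momentum scatters, the less the spin dephases. The numbers below are order-of-magnitude placeholders, not the paper's eight-band self-consistent results.

```python
def dp_relaxation_time(omega, tau_p):
    # D'yakonov-Perel' motional-narrowing estimate: 1/tau_s ~ Omega^2 * tau_p,
    # with Omega the effective spin-orbit precession frequency (rad/s) and
    # tau_p the momentum scattering time (s). Illustrative values only.
    return 1.0 / (omega ** 2 * tau_p)

tau_fast = dp_relaxation_time(omega=1.0e12, tau_p=1.0e-13)  # weaker scattering
tau_slow = dp_relaxation_time(omega=1.0e12, tau_p=1.0e-14)  # stronger scattering
print(tau_slow / tau_fast)  # more scattering -> longer spin lifetime
```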