    Coherent population trapping with polarization modulation

    Coherent population trapping (CPT) is extensively studied for future vapor-cell clocks of high frequency stability. In the constructive polarization modulation CPT scheme, a bichromatic laser field whose polarization and phase are synchronously modulated is applied to an atomic medium. A high-contrast CPT signal is observed in this so-called double-modulation configuration because the atomic population does not leak to the extreme Zeeman states, and because the two CPT dark states, produced successively by the alternate polarizations, add constructively. Here we experimentally investigate CPT signal dynamics, first in the usual configuration with a single circular polarization. The double-modulation scheme is then addressed in both cases: one-pulse Rabi interaction and two-pulse Ramsey interaction. The impact and optimization of the experimental parameters involved in the time sequence are reviewed. We show that a simple seven-level model explains the experimental observations. The double-modulation scheme yields a high contrast similar to that of other high-contrast configurations, such as push-pull optical pumping or the crossed-linear-polarization scheme, with a setup allowing higher compactness. Constructive polarization modulation is attractive for atomic clock, atomic magnetometer, and high-precision spectroscopy applications. Comment: 13 pages, 15 figures. To be published in Journ. Appl. Phys. (2016).
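
    The dark-state mechanism invoked above can be illustrated with a toy model. The sketch below uses a minimal three-level Λ system rather than the paper's seven-level model, and the Rabi frequencies are arbitrary illustrative values; it only shows why the superposition formed at two-photon resonance decouples from the light field.

```python
import numpy as np

# Minimal three-level Lambda system: two ground states |g1>, |g2> coupled
# to one excited state |e> by a bichromatic field (rotating-wave picture).
# Illustrative toy model only, not the paper's seven-level model.
omega1, omega2 = 1.0, 0.7   # Rabi frequencies (arbitrary units, assumed)

g1 = np.array([1, 0, 0], dtype=complex)
g2 = np.array([0, 1, 0], dtype=complex)
e  = np.array([0, 0, 1], dtype=complex)

# Coupling Hamiltonian at two-photon (Raman) resonance:
# H = (omega1/2)|e><g1| + (omega2/2)|e><g2| + h.c.
H = 0.5 * (omega1 * np.outer(e, g1) + omega2 * np.outer(e, g2))
H = H + H.conj().T

# Dark state: |D> ~ omega2|g1> - omega1|g2>
dark = omega2 * g1 - omega1 * g2
dark /= np.linalg.norm(dark)

# The dark state is annihilated by the coupling, so population optically
# pumped into |D> stops absorbing light (the CPT resonance).
print("||H|D>|| =", np.linalg.norm(H @ dark))    # ~0: decoupled
bright = (omega1 * g1 + omega2 * g2) / np.hypot(omega1, omega2)
print("||H|B>|| =", np.linalg.norm(H @ bright))  # nonzero: still driven
```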

    FAST: Feature-Aware Student Knowledge Tracing

    Various kinds of e-learning systems, such as Massive Open Online Courses and intelligent tutoring systems, are now producing large amounts of feature-rich data from students solving items at different levels of proficiency over time. To analyze such data, researchers often use Knowledge Tracing [4], a 20-year-old method that has become the de facto standard for inferring a student's knowledge from performance data. Knowledge Tracing uses Hidden Markov Models (HMMs) to estimate the latent cognitive state (the student's knowledge) from the student's performance answering items. Since the original Knowledge Tracing formulation does not allow general features to be modeled, a considerable amount of research has focused on ad hoc modifications of the Knowledge Tracing algorithm that enable modeling a specific feature of interest. This has led to a plethora of Knowledge Tracing reformulations for very specific purposes. For example, Pardos et al. [5] proposed a new model to measure the effect of students' individual characteristics, Beck et al. [2] modified Knowledge Tracing to assess the effect of help in a tutoring system, and Xu and Mostow [7] proposed a new model that allows measuring the effect of subskills. These ad hoc models are successful for their own specific purposes, but they do not generalize to arbitrary features. Other student-modeling methods that allow more flexible features have been proposed. For example, Performance Factor Analysis [6] uses logistic regression to model arbitrary features, but it does not infer whether the student has learned a skill. We present FAST (Feature-Aware Student knowledge Tracing), a novel method that allows general features in Knowledge Tracing. FAST combines Performance Factor Analysis (logistic regression) with Knowledge Tracing, leveraging previous work on unsupervised learning with features [3]. FAST is therefore able to infer a student's knowledge, as Knowledge Tracing does, while also allowing arbitrary features, as Performance Factor Analysis does. FAST admits general features into Knowledge Tracing by replacing the generative emission probabilities (often called guess and slip probabilities) with logistic regression [3], so that these probabilities can change over time as the student's knowledge is inferred. FAST uses arbitrary features to train the logistic regression model and the HMM jointly; training the parameters simultaneously enables FAST to learn from the features, which differs from merely using regression to analyze the slip and guess probabilities [1]. To validate our approach, we use data collected from real students interacting with a tutor. We present experimental results comparing FAST with Knowledge Tracing and Performance Factor Analysis, using features such as item difficulty and a student's prior successes and failures on the skill (or multiple skills) associated with the item, following the formulation of Performance Factor Analysis.
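
    As a rough illustration of the mechanism the abstract describes, the sketch below implements a two-state HMM (skill unknown/known) whose emission probabilities come from per-state logistic regressions over features, in place of Knowledge Tracing's constant guess and slip parameters. All weights, features, and parameter values are made up for illustration; this is not the authors' implementation, and the joint training step is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy FAST-style inference: a 2-state HMM (0 = skill unknown, 1 = known)
# whose emissions P(correct | state, features) are logistic regressions,
# replacing Knowledge Tracing's constant guess/slip probabilities.
p_init  = np.array([0.8, 0.2])            # P(unknown), P(known) before practice
p_learn = 0.15                            # P(unknown -> known) per opportunity
T = np.array([[1 - p_learn, p_learn],
              [0.0,         1.0     ]])   # no forgetting, as in classic KT

w = np.array([[-1.0, -0.8],               # emission weights, state "unknown"
              [ 2.0, -0.8]])              # emission weights, state "known"
# feature vector per opportunity: [bias, item_difficulty]  (hypothetical)

def infer_knowledge(correct, features):
    """Forward algorithm: P(known) after each observed response."""
    belief = p_init.copy()
    trace = []
    for y, x in zip(correct, features):
        p_correct = sigmoid(w @ x)            # per-state P(correct | features)
        lik = p_correct if y == 1 else 1 - p_correct
        belief = belief * lik
        belief /= belief.sum()                # condition on the response
        trace.append(belief[1])
        belief = belief @ T                   # learning transition
    return trace

responses = [0, 0, 1, 1]                      # wrong, wrong, right, right
feats = [np.array([1.0, d]) for d in (0.9, 0.5, 0.5, 0.2)]
print([round(p, 3) for p in infer_knowledge(responses, feats)])
```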

    Ecological User Equilibrium in Traffic Management (TM)?

    With increasing awareness of environmental sustainability, ecological traffic management (eco-TM) has come into the focus of researchers and practitioners. While different approaches have been applied to minimize pollutant production, the classic user-equilibrium calculation, with pollutant production used as travel cost instead of travel time, remains at the center of attention. However, the validity of such a direct substitution for finding a user equilibrium is questionable. In this paper, a simplified analytical approach to examining this validity is carried out, followed by a simulation approach to verify the analytical results. The results show that the pollutant production function violates the usual assumption of a monotonic cost function (typically, emissions have a minimum at travel speeds around 60 km/h). They also indicate that the respective algorithms for computing the user equilibrium must deal with the fact that the equilibrium solution is not unique and depends on the initial solution. This means that substantial modifications to the user-equilibrium algorithms have to be discussed, since they do not work as intended when pollutant production is used as travel cost, especially in a transportation system with mixed speeds that cover a range around the minimum-emission speed.
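
    The non-uniqueness argument can be made concrete with a stylized emission curve. In the sketch below, the per-kilometre emission function and its coefficients are assumptions chosen only to give a U-shape with a minimum near 60 km/h; real emission models are considerably more complex.

```python
import numpy as np

# Stylized per-km emission "cost" as a function of speed (km/h).
# The functional form and coefficients are illustrative assumptions only;
# with these values the curve is U-shaped with a minimum at 60 km/h.
def emission_per_km(v):
    return 1080.0 / v + 0.0025 * v**2

v = np.linspace(10.0, 120.0, 2000)
e = emission_per_km(v)
print(f"minimum-emission speed ~ {v[np.argmin(e)]:.0f} km/h")

# Non-monotonicity: a slow and a fast speed with (almost) the same cost.
target = emission_per_km(40.0)
mask = v > 60.0
v_fast = v[mask][np.argmin(np.abs(e[mask] - target))]
print(f"emission at 40 km/h ~ emission at {v_fast:.0f} km/h")
# With a cost like this, the user-equilibrium assignment is not unique:
# iterative equilibrium algorithms can settle on either side of the
# minimum, depending on their initial solution.
```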

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Travel time expenditures and travel time budgets - Preliminary findings

    There has been discussion for four decades now on the issue of whether or not people around the world have a constant travel-time budget. Most of the research into travel-time budgets has used large aggregate data sets and has shown that average amounts of time spent travelling are on the order of 1 to 1.5 hours. There have also been a number of studies that have failed to find evidence of constancy in travel-time budgets. In this paper, the authors report on preliminary research that uses data from a panel of 50 households that provided GPS data for a period of up to 28 days. In the research to date, the analysis deals only with evidence from one wave of the panel, to determine whether there is evidence of stability in travel-time expenditures over a period of one week. The data set provides very precise travel times for each person for up to 28 consecutive days. The analysis examines travel-time expenditure on a daily basis per person and then aggregates it to a week. The issue of regression to the mean is also considered, and it is concluded that it is not a problem in this analysis. Evidence is found of some stability in travel-time expenditures, especially when the data are averaged over a two-week period.
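
    As a sketch of the kind of aggregation described (per-person daily travel-time expenditures rolled up to weekly averages), assuming a tidy table of GPS-derived trips; the column names and values below are hypothetical, not from the panel data itself.

```python
import pandas as pd

# Hypothetical GPS trip records: one row per trip, duration in minutes.
trips = pd.DataFrame({
    "person":  ["a", "a", "a", "b", "b", "a", "b"],
    "date":    pd.to_datetime(["2004-03-01", "2004-03-01", "2004-03-02",
                               "2004-03-01", "2004-03-03", "2004-03-08",
                               "2004-03-09"]),
    "minutes": [22, 35, 48, 15, 60, 41, 30],
})

# Daily travel-time expenditure per person ...
daily = trips.groupby(["person", "date"])["minutes"].sum().reset_index()

# ... then averaged over each (ISO) week.
daily["week"] = daily["date"].dt.isocalendar().week
weekly = daily.groupby(["person", "week"])["minutes"].mean()
print(weekly)
# Comparing per-person weekly (and two-week) means across waves is one
# way to probe the stability of travel-time expenditures.
```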

    Correcting the Bias in the Practitioner Black-Scholes Method

    We address a number of technical problems with the popular Practitioner Black-Scholes (PBS) method for valuing options. The method amounts to a two-stage procedure in which fitted values of implied volatility (IV) from a linear regression are plugged into the Black-Scholes formula to obtain predicted option prices. First, we ensure that the stage-one prediction is positive by using log-linear regression. Second, we correct the bias (see Christoffersen and Jacobs, 2004, p. 298) that arises because the transformation applied to the fitted values (the Black-Scholes formula) is a highly non-linear function of implied volatility; we apply the smearing technique (Duan, 1983) to correct this bias. An alternative means of implementing the PBS approach is to use the market option price as the dependent variable and estimate the parameters of the IV equation by non-linear least squares (NLLS). A problem we identify with this method is one of model incoherency: the IV equation that is estimated does not correspond to the set of option prices used to estimate it. We use the Monte Carlo method to verify that (1) standard PBS gives biased option values, both in-sample and out-of-sample; (2) using standard (log-linear) PBS with smearing almost completely eliminates the bias; and (3) NLLS gives biased option values, but the bias is less severe than with standard PBS. We conclude that, of the range of possible approaches to implementing PBS, log-linear PBS with smearing is preferred, on the basis that it is the only approach that results in valuations with negligible bias.
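
    A minimal sketch of the smearing correction as described in the abstract: fit log(IV) by ordinary least squares, then average the Black-Scholes transform over the empirical residuals rather than pricing at the single fitted IV. The regression specification and the data below are synthetic placeholders; only the smearing step follows Duan (1983).

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call; sigma may be an array."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Stage one: log-linear IV regression, log(IV) = X @ beta + eps.
# X is a placeholder design matrix (e.g. moneyness and maturity terms).
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n),
                     rng.uniform(0.8, 1.2, n),    # "moneyness" (synthetic)
                     rng.uniform(0.1, 1.0, n)])   # "maturity"  (synthetic)
log_iv = X @ np.array([-1.6, 0.3, 0.1]) + rng.normal(0.0, 0.1, n)
beta, *_ = np.linalg.lstsq(X, log_iv, rcond=None)
resid = log_iv - X @ beta

# Stage two: price an option from the fitted IV.
S, K, T, r = 100.0, 105.0, 0.5, 0.02
x_new = np.array([1.0, S / K, T])

plug_in = black_scholes_call(S, K, T, r, np.exp(x_new @ beta))
# Smearing (Duan, 1983): average the non-linear transform of the fitted
# value over the empirical residual distribution instead of plugging in
# the single fitted IV, removing the retransformation bias.
smeared = black_scholes_call(S, K, T, r, np.exp(x_new @ beta + resid)).mean()
print(f"plug-in price: {plug_in:.4f}  smearing-corrected: {smeared:.4f}")
```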

    First-principles design of next-generation nuclear fuels

    The behavior of nuclear fuel in a reactor is a complex phenomenon that is influenced by a large number of materials properties, including thermomechanical strength, chemical stability, microstructure, and defects. As a consequence, a comprehensive understanding of fuel material behavior presents a significant modeling challenge, which must be mastered to improve the efficiency and reliability of current nuclear reactors. It is also essential to the development of advanced fuel materials for next-generation reactors. Over the last two decades, the use of density functional theory (DFT) has greatly contributed to our understanding by providing detailed information on nuclear fuel materials, ranging from fundamental properties of f-electron systems to thermomechanical materials properties. This article briefly summarizes the main achievements of this first-principles computational methodology as it applies to nuclear fuel materials. The current status of first-principles modeling is also discussed, considering existing limitations and drawbacks such as system-size limitations and the added complexity associated with high-temperature analysis. Finally, the future role of DFT modeling in the nuclear fuels industry is put into perspective.