
    Exploring Charge and Color Breaking vacuum in Non-Holomorphic MSSM

    The Non-Holomorphic MSSM (NHSSM) shows various promising features that are not easily obtained in the MSSM. However, the additional non-holomorphic (NH) trilinear interactions that give rise to these interesting phenomenological features also modify the effective scalar potential of the model significantly. We derive analytic constraints involving the trilinear parameters $A_t'$ and $A_b'$ that exclude global charge and color breaking (CCB) minima. Since the analytic constraints are obtained along specific directions in the multi-dimensional field space, we further probe their applicability with an exhaustive scan over the NH parameter space in two different regimes of $\tan\beta$, and delineate the nature of metastability by considering vacuum expectation values for the third-generation squarks. We adhere to a natural scenario by fixing the higgsino mass parameter ($\mu$) to a low value and estimate the allowed ranges of the NH trilinear parameters, taking vacuum stability and the observed Higgs properties as the determining criteria. Comment: 29 pages, 10 figures, Results section elaborated, conclusion unchanged, published in JHEP
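    For orientation, the best-known analytic CCB condition in the ordinary (holomorphic) MSSM, obtained along the D-flat direction $|H_u| = |\tilde{t}_L| = |\tilde{t}_R|$, is the standard bound

    ```latex
    |A_t|^2 \;\leq\; 3\left(m_{\tilde{Q}_3}^2 + m_{\tilde{t}_R}^2 + m_{H_u}^2 + \mu^2\right)
    ```

    The NH trilinear terms $A_t'$ and $A_b'$ modify conditions of this type; the precise NH analogues are the constraints derived in the paper itself.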

    Learning from Past Conflict: Investigating the Time Scale of Conflict Learning for Cognitive Control Processes

    Conflict-modulated cognitive control accounts posit that control processes adjust attention based on the probability of conflict associated with a given context (e.g., a list of items, a particular item within a list, etc.). However, within these accounts, it is not yet fully understood how the control system learns about the probability of conflict. The specific question I address in the present research is: how far back does the control system look to learn about the probability of conflict? In other words, what is the time scale of conflict learning for the control system? I use a statistical model recently developed by Aben et al. (2017) that captured the time scale of conflict learning for list-level control processes in a flanker task. The set of analyses I present shows that this model reliably captures the time scale of conflict learning for task-general, list-level control processes (Analysis 1 and Analysis 2). In addition, I demonstrate that there are no differences in the time scale of conflict learning for differentially conflicting items within a list, which are thought to engage item-level control (Analysis 3 and Analysis 4). I discuss potential reasons for the time scale patterns and the implications they may have for extant theories of cognitive control.
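    The general idea of a "time scale of conflict learning" can be sketched as an exponentially weighted average over the trial history, where the decay constant sets how far back the system looks. This is a minimal illustration of the concept, not Aben et al.'s (2017) actual model; the function name and parameter values are hypothetical.

    ```python
    import math

    def conflict_estimate(conflict_history, tau):
        """Estimate the probability of conflict on the upcoming trial as an
        exponentially weighted average of past conflict codes (1 = conflict
        trial, 0 = non-conflict trial). Recent trials weigh more heavily;
        tau (in trials) is the time scale of learning."""
        n = len(conflict_history)
        weights = [math.exp(-(n - 1 - i) / tau) for i in range(n)]
        return sum(w * c for w, c in zip(weights, conflict_history)) / sum(weights)

    # Three conflict trials followed by three non-conflict trials:
    history = [1, 1, 1, 0, 0, 0]
    short = conflict_estimate(history, tau=1.0)    # dominated by recent trials
    long_ = conflict_estimate(history, tau=50.0)   # approaches the list-wide rate
    ```

    A short time scale yields an estimate near zero here (the recent trials were conflict-free), while a long time scale stays near the list-wide conflict rate of 0.5.
    
    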

    Variability and Location of Movement Endpoint Distributions: The Influence of Instructions for Movement Speed and Accuracy

    An influential theory of motor control predicts that targeted hand movements should be aimed at the target center and that the variability of movement endpoint distributions should fill the target region (Meyer et al., 1988). Because increases in movement endpoint variability correlate with increases in movement speed (Schmidt et al., 1979), centering the distribution on the target center and expanding variability to the limits of the target boundaries should maximize movement speed without producing movement errors (i.e., target misses). Slifkin and Eder (2016) recently found that those predictions held only over a range of small target widths; as target width increased, the endpoint distribution variability increasingly underestimated the variability permitted by the target boundaries, and the location of the distribution center increasingly underestimated the target center. A strong relationship was observed between the unutilized target region and aim points shifting away from the target center. Those results suggest that the downward shift in endpoint distribution location was based on “knowledge” of the amount of endpoint variability relative to the unused space in the target, and such downward shifts may reflect a reduction of travel costs (e.g., movement distance). Thus, there may be a link between unused space and the amount of distance minimization that occurs. Here, we extend the results of Slifkin and Eder (2016) by explicitly manipulating endpoint distribution variability through task instructions, thereby allowing a more direct investigation of the link between unused space and distance minimization. The instructions emphasized either (1) movement accuracy, (2) both movement accuracy and speed, or (3) movement speed. Participants generated movements under different target width and amplitude requirement conditions. Variability increased as the emphasis on movement speed increased. In turn, as variability increased within a given target width condition, the amount of unused space within the target region decreased. The results support the notion that the relation between aiming and knowledge of variability was maintained, but the nature of that relationship was influenced by the instruction conditions. The implications of these results for models of optimal motor control are discussed.
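    The "unused space" notion can be made concrete with a toy calculation: if roughly ±2 SD of endpoints counts as the distribution's effective extent, the unused portion of the target is the width left over. This is a hypothetical illustration (the function, the 4-SD coverage convention, and the numbers are assumptions, not the paper's analysis).

    ```python
    def unused_target_space(target_width, endpoint_sd, coverage_sds=4.0):
        """Approximate the portion of the target not 'filled' by the endpoint
        distribution, treating +/- 2 SD (~95% of endpoints) as its effective
        extent. Units are arbitrary."""
        effective_spread = coverage_sds * endpoint_sd
        return max(0.0, target_width - effective_spread)

    # As speed emphasis raises endpoint variability (Schmidt et al., 1979),
    # unused space within a fixed target width shrinks:
    width = 40.0
    spaces = [unused_target_space(width, sd) for sd in (4.0, 7.0, 10.0)]
    # accuracy -> accuracy+speed -> speed emphasis (hypothetical SDs)
    ```

    With these illustrative numbers, unused space falls from 24 to 12 to 0 as the emphasis shifts toward speed, mirroring the qualitative pattern the abstract reports.
    
    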

    Exploring viable vacua of the $Z_3$-symmetric NMSSM

    We explore the vacua of the $Z_3$-symmetric Next-to-Minimal Supersymmetric Standard Model (NMSSM) and their stability by going beyond the simplistic paradigm that works with a tree-level neutral scalar potential and adheres to some specific flat directions in the field space. Key effects are demonstrated by first studying the profiles of this potential under various circumstances of physical interest via a semi-analytical approach. The results thereof are compared to the ones obtained from a dedicated package like \veva, which further incorporates the thermal effects in the potential. Regions of the phenomenological NMSSM (pNMSSM) parameter space that render the desired symmetry breaking (DSB) vacuum absolutely stable, or long- or short-lived (in relation to the age of the Universe) under quantum/thermal tunneling, are delineated. Regions that result in color and charge breaking (CCB) minima are also presented. It is demonstrated that light singlet scalars, along with a light LSP (lightest supersymmetric particle) having an appreciable singlino admixture, are compatible with a viable DSB vacuum and are highly relevant to collider experiments. Comment: 52 pages, 19 figures, 4 tables; matches the published version
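    The stable/long-lived/short-lived classification used above conventionally reduces to a threshold on the Euclidean bounce action $S_E$ of the tunneling solution: the decay probability over the age of the Universe is negligible when $S_E$ exceeds roughly 400. The sketch below illustrates only that decision logic; the threshold value and the function are illustrative, not the paper's implementation.

    ```python
    import math

    # Conventional bounce-action threshold above which the decay probability
    # of the false vacuum over the age of the Universe is negligible
    # (order-of-magnitude value; prefactors are neglected here).
    S_CRITICAL = 400.0

    def vacuum_fate(bounce_action):
        """Classify a DSB vacuum given the bounce action S_E for tunneling
        to the deepest competing minimum (inf = no deeper minimum)."""
        if math.isinf(bounce_action):
            return "absolutely stable"
        return "long-lived" if bounce_action > S_CRITICAL else "short-lived"
    ```

    Dedicated tools compute $S_E$ numerically from the full scalar potential (including thermal corrections); this toy function only maps the resulting action onto the three fates quoted in the abstract.
    
    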

    A Kalman Filter Approach for Biomolecular Systems with Noise Covariance Updating

    An important part of system modeling is determining parameter values, particularly for biomolecular systems, where direct measurements of individual parameters are typically hard. While Extended Kalman Filters have been used for this purpose, the choice of the process noise covariance is generally unclear. In this chapter, we address this issue for biomolecular systems using a combination of Monte Carlo simulations and experimental data, exploiting the dependence of the process noise covariance on the states and parameters, as given in the Langevin framework. We adapt a hybrid Extended Kalman Filtering technique by updating the process noise covariance at each time step based on the current estimates. We compare the performance of this framework against fixed values of the process noise covariance in biomolecular system models, including an oscillator model, as well as on experimentally measured data from a negative transcriptional feedback circuit. We find that the Extended Kalman Filter with such a process noise covariance update comes closer to the optimality condition, in the sense that the innovation sequence becomes white, and achieves a better balance between the mean square estimation error and the parameter convergence time. The results of this chapter may help in the use of Extended Kalman Filters for systems where the process noise covariance depends on states and/or parameters. Comment: 23 pages, 9 figures
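    The core idea, re-evaluating the Langevin-derived process noise covariance from the current estimate at each step rather than fixing it, can be sketched for a one-dimensional birth-death process dx/dt = k - g·x, whose Langevin noise variance is (k + g·x)·dt. The model, parameter values, and function below are illustrative assumptions, not the chapter's actual circuits or code.

    ```python
    import numpy as np

    def ekf_langevin(y, k=10.0, g=0.5, r=1.0, dt=0.1):
        """1-D extended Kalman filter for the birth-death process
            dx/dt = k - g*x + noise,
        with state-dependent process noise Q = (k + g*x)*dt taken from the
        Langevin framework and re-evaluated at every step from the current
        estimate. y holds direct noisy observations of x (variance r)."""
        xhat, p = y[0], 1.0
        estimates = []
        for meas in y:
            # Predict, recomputing Q from the current state estimate.
            xpred = xhat + (k - g * xhat) * dt
            f = 1.0 - g * dt                       # Jacobian of the dynamics
            q = (k + g * max(xhat, 0.0)) * dt      # state-dependent process noise
            ppred = f * p * f + q
            # Measurement update.
            gain = ppred / (ppred + r)
            xhat = xpred + gain * (meas - xpred)
            p = (1.0 - gain) * ppred
            estimates.append(xhat)
        return np.array(estimates)

    # Noisy observations around the steady state k/g = 20:
    rng = np.random.default_rng(0)
    y = 20.0 + rng.normal(0.0, 1.0, 200)
    est = ekf_langevin(y)
    ```

    A parameter-estimating version augments the state with the unknown rates (e.g. k, g) and extends the Jacobian accordingly; the covariance update step is unchanged.
    
    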

    Overview of the DESI Legacy Imaging Surveys

    The DESI Legacy Imaging Surveys (http://legacysurvey.org/) are a combination of three public projects (the Dark Energy Camera Legacy Survey, the Beijing–Arizona Sky Survey, and the Mayall z-band Legacy Survey) that will jointly image ≈14,000 deg^2 of the extragalactic sky visible from the northern hemisphere in three optical bands (g, r, and z) using telescopes at the Kitt Peak National Observatory and the Cerro Tololo Inter-American Observatory. The combined survey footprint is split into two contiguous areas by the Galactic plane. The optical imaging is conducted using a unique strategy of dynamically adjusting the exposure times and pointing selection during observing that results in a survey of nearly uniform depth. In addition to calibrated images, the project is delivering a catalog, constructed by using a probabilistic inference-based approach to estimate source shapes and brightnesses. The catalog includes photometry from the grz optical bands and from four mid-infrared bands (at 3.4, 4.6, 12, and 22 μm) observed by the Wide-field Infrared Survey Explorer satellite during its full operational lifetime. The project plans two public data releases each year. All the software used to generate the catalogs is also released with the data. This paper provides an overview of the Legacy Surveys project.
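    The "dynamic exposure adjustment" strategy amounts to scaling each exposure so that a fixed target depth is reached as seeing, transparency, and sky brightness vary. The sketch below uses only generic point-source photometry scalings; the function, the nominal seeing value, and the exact scalings are assumptions for illustration and differ in detail from the Legacy Surveys' actual tuning.

    ```python
    def exposure_scale(seeing, transparency, sky_brightness_factor,
                       nominal_seeing=1.3):
        """Multiplicative factor on the nominal exposure time needed to reach
        a fixed point-source depth: worse seeing spreads flux over more sky
        area (~seeing^2), lower transparency cuts signal (~transparency^2 in
        required time), and a brighter sky raises the background noise."""
        return ((seeing / nominal_seeing) ** 2
                / transparency ** 2
                * sky_brightness_factor)

    nominal = exposure_scale(1.3, 1.0, 1.0)   # nominal conditions -> factor 1
    bad_seeing = exposure_scale(2.6, 1.0, 1.0)
    hazy = exposure_scale(1.3, 0.5, 1.0)
    ```

    Doubling the seeing or halving the transparency each calls for roughly four times the nominal exposure, which is why observing under fixed exposure times produces very non-uniform depth.
    
    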