
    Group integrative dynamic factor models for inter- and intra-subject brain networks

    This work introduces a novel framework for dynamic factor model-based data integration of multiple subjects, called GRoup Integrative DYnamic factor models (GRIDY). The framework facilitates the determination of inter-subject differences between two pre-labeled groups by considering a combination of group spatial information and individual temporal dependence. Furthermore, it enables the identification of intra-subject differences over time by employing different model configurations for each subject. Methodologically, the framework combines a novel principal angle-based rank selection algorithm with a non-iterative integrative analysis procedure. Inspired by simultaneous component analysis, this approach also reconstructs identifiable latent factor series with flexible covariance structures. The performance of the framework is evaluated through simulations conducted under various scenarios and through the analysis of resting-state functional MRI data collected from multiple subjects in both the Autism Spectrum Disorder group and the control group.
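    As a rough illustration of the principal-angle machinery that such a rank-selection step can build on, the sketch below computes principal angles between two subjects' estimated loading subspaces with NumPy. The function name and toy data are ours for illustration, not taken from the GRIDY paper.

```python
# Minimal sketch: principal angles between two estimated factor subspaces.
# Illustrative only; not the GRIDY rank-selection algorithm itself.
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)   # orthonormal basis for col(A)
    Qb, _ = np.linalg.qr(B)   # orthonormal basis for col(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(0)
shared = rng.standard_normal((50, 2))                   # common 2-dim subspace
A = np.hstack([shared, rng.standard_normal((50, 1))])   # subject 1 loadings
B = np.hstack([shared, rng.standard_normal((50, 1))])   # subject 2 loadings
print(np.degrees(principal_angles(A, B)))  # small angles flag shared directions
```

    Angles near zero indicate directions the two subjects' loading spaces share, which is the kind of signal a principal angle-based rank selection can exploit.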

    Functional dynamic factor models with application to yield curve forecasting

    Accurate forecasting of zero-coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains its economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, in which the time series parameters and factor loading curves are estimated simultaneously in a single step. Efficient computation is implemented, and a data-driven smoothing parameter is incorporated naturally. We show that our model performs very well in forecasting actual yield data compared with existing approaches, especially with regard to a profit-based assessment in an innovative trading exercise. We further illustrate the viability of our model for applications outside of yield forecasting. Comment: Published at http://dx.doi.org/10.1214/12-AOAS551 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
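    To make the factor-forecasting idea concrete, here is a deliberately simplified sketch: PCA loadings stand in for the paper's smoothed functional loading curves, and a per-factor AR(1) fit stands in for the EM-estimated state dynamics. All data, dimensions, and names below are simulated and illustrative, not the paper's method.

```python
# Simplified sketch of factor-based yield-curve forecasting on simulated data.
import numpy as np

rng = np.random.default_rng(1)
T, M, K = 200, 30, 3                      # time points, maturities, factors
taus = np.linspace(0.25, 10, M)           # maturities in years
true_load = np.column_stack([np.ones(M),  # level / slope / curvature shapes
                             np.exp(-taus), taus * np.exp(-taus)])
F = np.zeros((T, K))
for t in range(1, T):                     # persistent latent factors
    F[t] = 0.95 * F[t - 1] + rng.standard_normal(K) * 0.1
Y = F @ true_load.T + rng.standard_normal((T, M)) * 0.05

Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
L = Vt[:K].T                              # estimated loading curves (M x K)
scores = Yc @ L                           # factor scores (T x K)

# AR(1) slope per factor (intercept dropped for simplicity), then a
# one-step-ahead forecast of the whole curve over maturities.
phi = np.array([np.polyfit(scores[:-1, k], scores[1:, k], 1)[0]
                for k in range(K)])
f_next = phi * scores[-1]
y_next = Y.mean(axis=0) + L @ f_next
print(y_next[:5])
```

    The paper's contribution is precisely what this sketch omits: smooth functional loading curves estimated jointly with the time series parameters in one EM step, rather than a two-stage PCA-then-AR pipeline.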

    A New Estimative Current Mode Control Technique for DC-DC Converters Operating in Discontinuous Conduction Mode

    A new digital control technique for power converters operating in discontinuous conduction mode (DCM) is introduced and applied to a boost converter. In contrast to conventional analogue control methods, the principal idea of this new control scheme is to use real-time analysis to estimate the required on-time of the switch based on the dynamics of the system. The proposed control algorithm can easily be programmed on a digital signal processor (DSP). This novel technique is applicable to any converter operating in DCM, including power factor correctors (PFCs). However, this work mainly focuses on the boost topology. In this paper, the main mathematical concept of the new control algorithm is introduced, along with a robustness investigation of the proposed method and simulation and experimental results.
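    The on-time estimation idea can be illustrated with the textbook steady-state DCM voltage gain of a boost converter, M = (1 + sqrt(1 + 4*D^2/K)) / 2 with K = 2*L*f_sw/R, which solves to D = sqrt(K*M*(M-1)). The sketch below applies that closed form; it is our simplification of the estimative principle, not the paper's real-time control law, and the component values are invented.

```python
# Hypothetical sketch: steady-state DCM on-time estimate for a boost
# converter, from the standard DCM gain equation (not the paper's law).
import math

def dcm_boost_on_time(v_in, v_out, L, f_sw, R_load):
    """Switch on-time (s) giving v_out in steady-state DCM (v_out > v_in)."""
    M = v_out / v_in                      # required voltage gain
    K = 2.0 * L * f_sw / R_load           # dimensionless conduction parameter
    D = math.sqrt(K * M * (M - 1.0))      # duty ratio from the DCM gain equation
    return D / f_sw                       # caller should verify DCM still holds

ton = dcm_boost_on_time(v_in=12.0, v_out=24.0, L=22e-6,
                        f_sw=100e3, R_load=50.0)
print(f"on-time = {ton * 1e6:.2f} us")
```

    A digital controller in this spirit would recompute such an estimate each cycle from measured quantities instead of relying on an analogue error amplifier.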

    Dynamic Inference in Probabilistic Graphical Models

    Probabilistic graphical models, such as Markov random fields (MRFs), are useful for describing high-dimensional distributions in terms of local dependence structures. Probabilistic inference is a fundamental problem related to graphical models, and sampling is a main approach to it. In this paper, we study probabilistic inference problems when the graphical model itself is changing dynamically with time. Such dynamic inference problems arise naturally in today's applications, e.g., multivariate time-series data analysis and practical learning procedures. We give a dynamic algorithm for sampling-based probabilistic inference in MRFs, where each dynamic update can change the underlying graph and all parameters of the MRF simultaneously, as long as the total amount of change is bounded. More precisely, suppose that the MRF has $n$ variables and polylogarithmically bounded maximum degree, and that $N(n)$ independent samples are sufficient for the inference, for a polynomial function $N(\cdot)$. Our algorithm dynamically maintains an answer to the inference problem using $\widetilde{O}(nN(n))$ space cost and $\widetilde{O}(N(n) + n)$ incremental time cost upon each update to the MRF, as long as the well-known Dobrushin-Shlosman condition is satisfied by the MRFs. Compared to the static case, which requires $\Omega(nN(n))$ time cost for redrawing all $N(n)$ samples whenever the MRF changes, our dynamic algorithm gives a $\widetilde{\Omega}(\min\{n, N(n)\})$-factor speedup. Our approach relies on a novel dynamic sampling technique, which transforms local Markov chains (a.k.a. single-site dynamics) into dynamic sampling algorithms, and an "algorithmic Lipschitz" condition that we establish for sampling from graphical models: when the MRF changes by a small difference, samples can be modified to reflect the new distribution, with cost proportional to the difference on the MRF.
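    For context, the single-site dynamics that the dynamic sampling technique transforms look like the Glauber update below, shown for an Ising model on a small graph. This sketch covers only the static local chain; the paper's dynamic-update machinery is not shown, and the graph and parameters are toy choices of ours.

```python
# Sketch of single-site (Glauber) dynamics for an Ising model on a graph.
import numpy as np

def glauber_step(spins, neighbors, beta, rng):
    """Resample one uniformly chosen site from its conditional distribution."""
    v = rng.integers(len(spins))
    local_field = sum(spins[u] for u in neighbors[v])
    # P(spin_v = +1 | neighbors) = e^{beta*h} / (e^{beta*h} + e^{-beta*h})
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
    spins[v] = 1 if rng.random() < p_plus else -1
    return spins

rng = np.random.default_rng(2)
n = 20                                                # a cycle on n vertices
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
spins = rng.choice([-1, 1], size=n)
for _ in range(10 * n * int(np.log(n) + 1)):          # roughly O(n log n) steps
    spins = glauber_step(spins, neighbors, 0.2, rng)
print(spins)
```

    Conditions like Dobrushin-Shlosman guarantee such chains mix rapidly; the paper's contribution is repairing already-drawn samples after a small change to the MRF instead of rerunning the chain from scratch.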

    MC-ADAPT: Adaptive Task Dropping in Mixed-Criticality Scheduling

    Recent embedded systems are becoming integrated systems with components of different criticality. To tackle this, mixed-criticality systems aim to provide different levels of timing assurance to components of different criticality levels while achieving efficient resource utilization. Many approaches have been proposed to execute more lower-criticality tasks without affecting the timeliness of higher-criticality tasks. Those previous approaches, however, have at least one of two limitations: i) they penalize all lower-criticality tasks at once in a certain situation, or ii) they decide at design time how to penalize lower-criticality tasks. As a consequence, they underutilize resources by imposing an excessive penalty on low-criticality tasks. Unlike those existing studies, we present a novel framework, called MC-ADAPT, that aims to minimally penalize lower-criticality tasks by fully reflecting the dynamically changing system behavior in adaptive decision making. Towards this, we propose a new scheduling algorithm and develop its runtime schedulability analysis, capable of capturing the dynamic system state. Our proposed algorithm adaptively determines which task to drop based on the runtime analysis. To determine the quality of a task-dropping solution, we propose a speedup factor for task dropping, whereas the conventional use of the speedup factor only evaluates MC scheduling algorithms in terms of worst-case schedulability. We apply the speedup factor to a newly defined task-dropping problem that evaluates task-dropping solutions under different runtime scheduling scenarios. We derive that MC-ADAPT has a speedup factor of 1.619 for task dropping. This implies that MC-ADAPT can behave the same as the optimal scheduling algorithm with the optimal task-dropping strategy under any runtime scenario if the system is sped up by a factor of 1.619.
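    A greatly simplified, hypothetical picture of adaptive task dropping: upon a criticality switch, drop as few low-criticality tasks as a crude utilization-based test requires. MC-ADAPT's actual scheduling algorithm and runtime analysis are far more precise than this; every name and number below is illustrative only.

```python
# Hypothetical sketch of minimal task dropping under a utilization test.
# Not MC-ADAPT's algorithm; a toy stand-in for the adaptive-dropping idea.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float      # execution budget assumed after the mode switch
    period: float
    hi_crit: bool

def drop_minimally(tasks, bound=1.0):
    """Return names of LO tasks to drop so total utilization <= bound."""
    util = sum(t.wcet / t.period for t in tasks)
    dropped = []
    # Drop LO tasks in decreasing utilization so that as few as possible go.
    for t in sorted((t for t in tasks if not t.hi_crit),
                    key=lambda t: t.wcet / t.period, reverse=True):
        if util <= bound:
            break
        util -= t.wcet / t.period
        dropped.append(t.name)
    return dropped

tasks = [Task("hi1", 4.0, 10.0, True), Task("hi2", 6.0, 12.0, True),
         Task("lo1", 2.0, 10.0, False), Task("lo2", 1.0, 20.0, False)]
print(drop_minimally(tasks))  # ['lo1']: dropping one task already suffices
```

    The contrast with the limitations listed above is the point: the decision of which tasks to drop is made at runtime, per scenario, rather than penalizing all low-criticality tasks at once or fixing the choice at design time.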

    Towards Fair Disentangled Online Learning for Changing Environments

    In the problem of online learning for changing environments, data are sequentially received one after another over time, and their distributions may vary frequently. Although existing methods demonstrate the effectiveness of their learning algorithms by providing a tight bound on either dynamic regret or adaptive regret, most of them completely ignore learning with model fairness, defined as statistical parity across different sub-populations (e.g., race and gender). Another drawback is that when adapting to a new environment, an online learner needs to update model parameters with a global change, which is costly and inefficient. Inspired by the sparse mechanism shift hypothesis, we claim that changing environments in online learning can be attributed to partial changes in learned parameters that are specific to environments, while the rest remain invariant to changing environments. To this end, in this paper, we propose a novel algorithm under the assumption that data collected at each time can be disentangled into two representations: an environment-invariant semantic factor and an environment-specific variation factor. The semantic factor is further used for fair prediction under a group fairness constraint. To evaluate the sequence of model parameters generated by the learner, a novel regret is proposed, taking a mixed form of dynamic and static regret metrics followed by a fairness-aware long-term constraint. The detailed analysis provides theoretical guarantees for the loss regret and for violations of the cumulative fairness constraints. Empirical evaluations on real-world datasets demonstrate that our proposed method sequentially outperforms baseline methods in model accuracy and fairness. Comment: Accepted by KDD 2023.
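    As a small concrete reference point, the group-fairness notion mentioned above (statistical parity) can be measured as the gap in positive-prediction rates between two sub-populations. The helper below is our illustration of that metric, not code from the paper.

```python
# Minimal sketch: statistical parity gap between two sub-populations.
import numpy as np

def statistical_parity_gap(y_pred, group):
    """|P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)|."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()   # positive rate in sub-population 0
    rate1 = y_pred[group == 1].mean()   # positive rate in sub-population 1
    return abs(rate0 - rate1)

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=1000)
# Synthetic predictions that slightly favor group 0, to show a nonzero gap.
y_pred = (rng.random(1000) < np.where(group == 0, 0.55, 0.45)).astype(int)
print(f"parity gap ~ {statistical_parity_gap(y_pred, group):.3f}")
```

    In the paper's setting this quantity enters as a long-term constraint, so the learner is evaluated on cumulative fairness violations over the whole sequence rather than at a single round.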

    A New Synergistic Forecasting Method for Short-Term Traffic Flow with Event-Triggered Strong Fluctuation

    To address the low accuracy of short-term traffic flow prediction caused by strong traffic flow fluctuations, a novel method for short-term traffic forecasting is proposed, combining an improved grey Verhulst prediction algorithm with first-order difference exponential smoothing. First, we construct an improved grey Verhulst prediction model by introducing a Markov chain into the traditional model. Then, based on a dynamic weighting factor, the improved grey Verhulst prediction method and the first-order difference exponential smoothing technique are combined into the new short-term forecasting method in an efficient way. Finally, experiments and analysis are carried out on actual data gathered from a strongly fluctuating environment to verify the effectiveness and rationality of the proposed scheme.
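    The combination step can be sketched as follows: exponentially smooth the first differences of the series for one forecaster, then weight the two forecasters by the inverse of their latest absolute errors. This is our illustrative reading of a dynamic weighting factor; the grey Verhulst component is replaced by a naive forecaster, so only the smoothing and weighting mechanics are shown.

```python
# Illustrative sketch: first-order difference exponential smoothing plus
# an error-based dynamic weight. The grey Verhulst model is NOT implemented;
# a naive last-value forecaster stands in for it.
import numpy as np

def diff_exp_smoothing_forecast(x, alpha=0.5):
    """One-step forecast via exponential smoothing of first differences."""
    d = np.diff(x)
    s = d[0]
    for v in d[1:]:
        s = alpha * v + (1 - alpha) * s   # smoothed increment
    return x[-1] + s

rng = np.random.default_rng(4)
flow = 100 + np.cumsum(rng.standard_normal(60))   # synthetic traffic series

# Score both forecasters on the latest observed value...
pred_a = diff_exp_smoothing_forecast(flow[:-1])
pred_b = flow[-2]                                  # naive stand-in forecaster
err_a = abs(pred_a - flow[-1]) + 1e-9
err_b = abs(pred_b - flow[-1]) + 1e-9
# ...then weight them inversely to their recent errors for the next step.
w = (1 / err_a) / (1 / err_a + 1 / err_b)
combined = w * diff_exp_smoothing_forecast(flow) + (1 - w) * flow[-1]
print(f"weight on smoothing model: {w:.2f}, forecast: {combined:.2f}")
```

    Inverse-error weighting of this kind lets the combination lean on whichever component has tracked the recent fluctuations better, which is the intuition behind combining a trend model with a smoothing model.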