
    Adaptive intermittent control: A computational model explaining motor intermittency observed in human behavior

    It is a fundamental question how our brain performs a given motor task in real time with its slow sensorimotor system. Computational theory has proposed the influential idea of feed-forward control, but it has mainly treated ballistic movements (such as reaching), because the motor commands must be calculated in advance of movement execution. As a possible mechanism for operating feed-forward control in continuous motor tasks (such as target tracking), we propose a control model, called "adaptive intermittent control" or "segmented control," in which the brain adaptively divides the continuous time axis into discrete segments and executes feed-forward control in each segment. The idea of intermittent control has been proposed in the fields of control theory, biological modeling, and nonlinear dynamical systems. Compared with these previous models, the key feature of the proposed model is that the system speculatively determines the segmentation based on future prediction and its uncertainty. Computer simulation showed that the proposed model realized faithful visuo-manual tracking with realistic sensorimotor delays and at lower computational cost (i.e., with fewer segments). Furthermore, it replicated "motor intermittency," that is, the intermittent discontinuities commonly observed in human movement trajectories. We argue that temporally segmented control is an inevitable strategy for a brain that must achieve a given task at small computational (or cognitive) cost, using a slow control system in an uncertain, variable environment, and that motor intermittency is a side effect of this strategy.
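
    The segmentation rule can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' model: a 1-D tracker re-plans a feed-forward segment whenever the open-loop prediction variance of the target would exceed a threshold. The target dynamics, noise level, variance-growth formula, and threshold are all assumptions, and sensorimotor delay is ignored.

```python
# A minimal sketch of uncertainty-triggered segmentation (illustrative
# assumptions throughout; not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 10.0                    # time step [s], total duration [s]
steps = int(T / dt)
sigma_w = 0.5                         # assumed target velocity diffusion
uncertainty_limit = 0.05              # re-plan when predicted variance exceeds this

# Target with random-walk velocity (an illustrative stand-in for the task).
velocity = np.cumsum(rng.normal(0, sigma_w, steps) * dt)
target = np.cumsum(velocity * dt)

hand = np.zeros(steps)
segment_starts = []
pos, i = 0.0, 0
while i < steps:
    segment_starts.append(i)
    # For this model the open-loop prediction variance of target position
    # grows roughly as sigma_w**2 * tau**3 / 3; run the feed-forward
    # segment until that variance would exceed the limit, then re-plan.
    tau = (3 * uncertainty_limit / sigma_w**2) ** (1 / 3)
    n = max(1, min(int(tau / dt), steps - i))
    goal = target[i]                           # observation at planning time
    hand[i:i + n] = np.linspace(pos, goal, n)  # one feed-forward segment
    pos = hand[i + n - 1]
    i += n

err = np.sqrt(np.mean((hand - target) ** 2))
print(f"{len(segment_starts)} segments, RMS tracking error = {err:.3f}")
```

    Tightening the uncertainty limit yields more, shorter segments (and better tracking at higher cost), which is the trade-off the abstract describes.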

    Analysis of Software Aging in a Web Server

    A number of recent studies have reported the phenomenon of "software aging," characterized by progressive performance degradation and/or an increased occurrence rate of hang/crash failures of a software system, due to the exhaustion of operating-system resources or the accumulation of errors. To counteract this phenomenon, a proactive technique called "software rejuvenation" has been proposed. It essentially involves stopping the running software, cleaning its internal state and/or its environment, and then restarting it. Software rejuvenation, being preventive in nature, raises the question of when to schedule it. Periodic rejuvenation, while straightforward to implement, may not yield the best results, because the rate at which software ages is not constant but depends on the time-varying system workload. Software rejuvenation should therefore be planned and initiated on the basis of the actual system behavior. This requires the measurement, analysis, and prediction of system resource usage. In this paper, we study the development of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource usage and activity parameters. Non-parametric statistical methods are then applied to detect and estimate trends in the data sets. Finally, we fit time series models to the data collected. Unlike the models used previously in research on software aging, these time series models allow for seasonal patterns, and we show how exploiting the seasonal variation can help in adequately predicting future resource usage. Based on the models employed here, proactive management techniques such as software rejuvenation triggered by actual measurements can be built.
    Keywords: software aging, software rejuvenation, Linux, Apache, web server, performance monitoring, prediction of resource utilization, non-parametric trend analysis, time series analysis
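
    As a rough illustration of the forecasting step, the sketch below fits a seasonal ARIMA model (via statsmodels' SARIMAX, standing in for the paper's fitted models) to a synthetic hourly series of free memory that combines an aging trend with a daily workload cycle, then uses the forecast to trigger rejuvenation proactively. The series, model orders, and threshold are assumptions for illustration only.

```python
# A minimal sketch, assuming hourly measurements of a resource such as
# free physical memory; requires numpy and statsmodels.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
hours = np.arange(21 * 24)                       # three weeks of hourly data
trend = -2.0 * hours                             # slow exhaustion (aging)
season = 40 * np.sin(2 * np.pi * hours / 24)     # daily workload pattern
free_mem = 10_000 + trend + season + rng.normal(0, 5, hours.size)

# Seasonal ARIMA captures both the aging trend (d=1) and the 24-hour cycle.
model = SARIMAX(free_mem, order=(1, 1, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=48)                # predict the next two days

# Trigger rejuvenation proactively if the forecast crosses a threshold.
THRESHOLD = 8_900                                # assumed minimum free memory
if (forecast < THRESHOLD).any():
    first = int(np.argmax(forecast < THRESHOLD))
    print(f"Schedule rejuvenation within {first + 1} hours")
```

    A purely periodic schedule would ignore the daily cycle; the seasonal model lets the predicted crossing time shift with the workload, which is the point the paper makes about exploiting seasonal variation.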

    Discrete-time cost analysis for a telecommunication billing application with rejuvenation

    Software rejuvenation is a proactive fault management technique that has been studied extensively in the recent literature. In this paper, we focus on the example of a telecommunication billing application considered in [1] and develop discrete-time stochastic models to estimate the optimal software rejuvenation schedules. More precisely, two software cost models with rejuvenation are formulated via discrete semi-Markov processes, and the optimal software rejuvenation schedules that minimize the expected costs per unit time in the steady state are derived analytically. Further, we develop statistically nonparametric algorithms to estimate the optimal software rejuvenation schedules, provided that complete sample data of failure times are given. A new statistical device, called the discrete total time on test statistic, is then introduced. Finally, we examine the asymptotic properties of the statistical estimation algorithms proposed in this paper through a simulation experiment.
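
    For flavor, the sketch below follows the spirit of the nonparametric approach rather than the paper's exact semi-Markov formulation: given complete failure-time samples, it evaluates the empirical long-run cost per unit time of each candidate discrete rejuvenation schedule (an age-replacement-style formulation) and picks the minimizer. The cost and downtime parameters are assumed for illustration.

```python
# A minimal sketch, not the paper's algorithm: empirical cost-rate
# minimization over discrete rejuvenation schedules.
import numpy as np

rng = np.random.default_rng(2)
failures = rng.gamma(shape=3.0, scale=20.0, size=200).astype(int) + 1

c_f, c_r = 50.0, 5.0          # assumed cost of a failure vs. a rejuvenation
t_f, t_r = 10.0, 1.0          # assumed downtime after failure / rejuvenation

def cost_rate(n, x):
    """Empirical long-run cost per unit time if we rejuvenate at age n."""
    fail = x <= n                                  # failure pre-empts the schedule
    cycle_cost = np.where(fail, c_f, c_r).mean()   # mean cost per renewal cycle
    cycle_len = (np.minimum(x, n) + np.where(fail, t_f, t_r)).mean()
    return cycle_cost / cycle_len                  # renewal-reward cost rate

candidates = np.arange(1, failures.max() + 1)
rates = np.array([cost_rate(n, failures) for n in candidates])
best = candidates[rates.argmin()]
print(f"optimal schedule: rejuvenate every {best} time units "
      f"(cost rate {rates.min():.3f})")
```

    The paper's discrete total time on test statistic serves the same end, estimating the cost-rate minimizer directly from the empirical failure-time distribution.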

    Adaptive gossip-based broadcast

    This paper presents a novel adaptation mechanism that allows every node of a gossip-based broadcast algorithm to adjust its rate of message emission 1) to the amount of resources available to the nodes within the same broadcast group and 2) to the global level of congestion in the system. The adaptation mechanism can be applied to all gossip-based broadcast algorithms we know of, and it makes their use more realistic in practical situations where nodes have limited resources whose quantity changes dynamically over time, without decreasing reliability.
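
    A minimal sketch of the adaptation idea (not the paper's protocol): each node scales its emission rate down as 1) its local buffer fills and 2) its estimate of system-wide congestion, inferred from observed message loss, rises. The class name and all parameters are illustrative assumptions.

```python
# A minimal sketch of resource- and congestion-aware gossip rate adaptation.
from dataclasses import dataclass

@dataclass
class GossipNode:
    base_rate: float = 10.0      # messages/round under ideal conditions
    buffer_capacity: int = 100
    buffered: int = 0
    loss_estimate: float = 0.0   # estimated fraction of messages lost

    def emission_rate(self) -> float:
        # 1) back off as local buffer occupancy approaches capacity
        local = 1.0 - self.buffered / self.buffer_capacity
        # 2) back off as the estimate of global congestion rises
        global_ = 1.0 - self.loss_estimate
        return max(1.0, self.base_rate * local * global_)

    def observe_round(self, delivered: int, expected: int) -> None:
        # exponentially weighted estimate of system-wide message loss
        loss = 1.0 - delivered / expected if expected else 0.0
        self.loss_estimate = 0.9 * self.loss_estimate + 0.1 * loss

node = GossipNode(buffered=60)
node.observe_round(delivered=7, expected=10)
print(f"adjusted rate: {node.emission_rate():.1f} messages/round")
```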

    Reusable rocket engine intelligent control system framework design, phase 2

    Elements of an advanced functional framework for reusable rocket engine propulsion system control are presented for the Space Shuttle Main Engine (SSME) demonstration case. Functional elements of the baseline functional framework are defined in detail. The SSME failure modes are evaluated, and specific failure modes are identified for inclusion in the advanced functional framework's diagnostic system. Active control of the SSME start transient is investigated, leading to the identification of a promising approach to mitigating start transient excursions. Key elements of the functional framework are simulated, and demonstration cases are provided. Finally, the advanced functional framework for control of reusable rocket engines is presented.

    Two risky and costly end-user related taboos when developing information systems: qualifications and accountability

    This work highlights two critical taboos in organizations: 1) taking for granted the quality of certain capabilities and attitudes of the end-user representatives (EUR) in information systems development projects (ISDP), and 2) the EUR's inherent accountability for losses in IS investments. These issues are addressed by neither theory nor research when assessing success/failure. A triangulation approach was applied to combine quantitative and qualitative methods, yielding convergent results and showing that in problematic cases, paradoxically, the origin of IS rejection by end users (EU) points towards the EUR themselves. We evaluated to what extent certain EUR factors impacted a large ISDP involving an enterprise resource planning (ERP) package, ranking the 'knowledge of the EUR' as the main latent variable. The results validate issues found throughout decades of praxis, confirming that, when not properly managed, the EUR role by itself has a direct relationship with IS rejection and significant losses in IS investments.

    The Dark Energy Survey

    We describe the Dark Energy Survey (DES), a proposed optical/near-infrared survey of 5000 sq. deg. of the South Galactic Cap to ~24th magnitude in SDSS griz, which would use a new 3 sq. deg. CCD camera to be mounted on the Blanco 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO). The survey data will allow us to measure the dark energy and dark matter densities and the dark energy equation of state through four independent methods: galaxy clusters, weak gravitational lensing tomography, galaxy angular clustering, and supernova distances. These methods are doubly complementary: they constrain different combinations of cosmological model parameters and are subject to different systematic errors. By deriving the four sets of measurements from the same data set with a common analysis framework, we will obtain important cross-checks of the systematic errors and thereby make a substantial and robust advance in the precision of dark energy measurements.
    Comment: White Paper submitted to the Dark Energy Task Force, 42 pages