
    Neighbour replica affirmative adaptive failure detection and autonomous recovery

    High availability is an important property of current distributed systems. The trend in current distributed systems such as grid computing and cloud computing is the delivery of computing as a service rather than a product, so these systems depend increasingly on high availability. Fail-stop failures are a significant disruptive factor for a high-availability distributed system. Hence, a new failure detection approach for distributed systems, called Affirmative Adaptive Failure Detection (AAFD), is introduced. AAFD utilises heartbeats for node monitoring. Subsequently, Neighbour Replica Failure Recovery (NRFR) is proposed for autonomous recovery in distributed systems. AAFD can be classified as an adaptive failure detector, since it adapts to unpredictable network conditions and CPU loads. NRFR exploits the advantages of the neighbour replica distributed technique (NRDT), combined with weighted priority selection, to achieve high availability, since automatic failure recovery through continuous monitoring is essential in current high-availability distributed systems. The environment is continuously monitored by AAFD, while the auto-reconfiguring environment that automates failure recovery is managed by NRFR. NRFR and AAFD are evaluated through a virtualisation-based implementation. The results show that AAFD performs 30% better than other detection techniques, while for recovery performance NRFR outperformed the others, with the sole exception of recovery in the two-replica distributed technique (TRDT). Subsequently, a realistic logical structure is modelled in a complex, interdependent distributed environment for NRDT and TRDT. The model predicts that NRDT availability is 38.8% better than TRDT, indicating that NRDT is the preferable replication environment for practical failure recovery in complex distributed systems. By significantly minimising the Mean Time To Repair (MTTR) and maximising the Mean Time Between Failures (MTBF), this research accomplishes its goal of providing a highly available, self-sustaining distributed system.
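    The abstract does not spell out the detection algorithm, but the core idea of an adaptive, heartbeat-based failure detector can be sketched in a few lines. The class below is an illustrative sketch, not AAFD itself: the sliding-window size and safety-margin multiplier are assumed parameters, and the adaptation rule (scaling the timeout by the mean observed heartbeat gap) is one common choice among several.

```python
import time
from collections import deque


class AdaptiveHeartbeatDetector:
    """Illustrative adaptive failure detector: the suspicion timeout is
    re-estimated from recent heartbeat inter-arrival gaps, so it loosens
    under network jitter or CPU load and tightens when conditions are
    stable.  A generic sketch, not the AAFD algorithm from the paper."""

    def __init__(self, window=100, safety_margin=2.0):
        self.gaps = deque(maxlen=window)    # recent inter-arrival gaps (seconds)
        self.last_beat = None
        self.safety_margin = safety_margin  # multiplier over the mean gap

    def heartbeat(self, now=None):
        # Record a heartbeat arrival and update the gap history.
        now = time.monotonic() if now is None else now
        if self.last_beat is not None:
            self.gaps.append(now - self.last_beat)
        self.last_beat = now

    def timeout(self):
        # Adaptive threshold: mean observed gap scaled by the safety margin.
        if not self.gaps:
            return float("inf")
        return (sum(self.gaps) / len(self.gaps)) * self.safety_margin

    def suspect_failed(self, now=None):
        # Suspect the monitored node once the silence exceeds the adaptive timeout.
        now = time.monotonic() if now is None else now
        if self.last_beat is None:
            return False
        return (now - self.last_beat) > self.timeout()
```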

    Technology transfer: Transportation

    Stanford Research Institute (SRI) has operated a NASA-sponsored Technology Application Team (TAT) for four years. The SRI Team is concentrating on solving problems in the public transportation area, on developing methods for decreasing the time gap between the development and the marketing of new technology, and on aiding the movement of knowledge across industrial, disciplinary, and regional boundaries. The SRI TAT has developed a methodology that includes adaptive engineering of aerospace technology and commercialization when a market is indicated. The SRI Team has handled highway problems on a regional rather than a state basis, because many states in similar climatic or geologic regions have similar problems. Program exposure has been increased to encompass almost all of the fifty states.

    Understanding and Diagnosing Visual Tracking Systems

    Several benchmark datasets for visual tracking research have been proposed in recent years. Despite their usefulness, whether they are sufficient for understanding and diagnosing the strengths and weaknesses of different trackers remains questionable. To address this issue, we propose a framework that breaks a tracker down into five constituent parts, namely, motion model, feature extractor, observation model, model updater, and ensemble post-processor. We then conduct ablative experiments on each component to study how it affects the overall result. Surprisingly, our findings run counter to some common beliefs in the visual tracking research community. We find that the feature extractor plays the most important role in a tracker. On the other hand, although the observation model is the focus of many studies, we find that it often brings no significant improvement. Moreover, the motion model and model updater contain many details that can affect the result, and the ensemble post-processor can improve the result substantially when the constituent trackers have high diversity. Based on our findings, we put together some very elementary building blocks into a basic tracker that is competitive in performance with state-of-the-art trackers. We believe our framework can provide a solid baseline when conducting controlled experiments for visual tracking research.
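    The five-component decomposition lends itself naturally to a modular interface. The sketch below is a hypothetical outline of such an interface, assuming a per-frame sampling-and-scoring tracker; the class name, the component method names, and the fuse step are illustrative assumptions, not the authors' code.

```python
class ModularTracker:
    """Illustrative decomposition of a tracker into the five components
    named in the paper; component interfaces are assumed, not prescribed."""

    def __init__(self, motion_model, feature_extractor, observation_model,
                 model_updater, post_processor=None):
        self.motion_model = motion_model            # proposes candidate boxes around the previous location
        self.feature_extractor = feature_extractor  # e.g. raw pixels, HOG, or CNN features
        self.observation_model = observation_model  # scores candidates against the target model
        self.model_updater = model_updater          # decides when and how to update the target model
        self.post_processor = post_processor        # optional ensemble fusion across trackers

    def track(self, frame, prev_box):
        # One tracking step: sample, describe, score, pick, update, optionally fuse.
        candidates = self.motion_model.sample(prev_box)
        features = [self.feature_extractor.extract(frame, box) for box in candidates]
        scores = self.observation_model.score(features)
        best_box = max(zip(candidates, scores), key=lambda pair: pair[1])[0]
        self.model_updater.maybe_update(self.observation_model, frame, best_box)
        if self.post_processor is not None:
            best_box = self.post_processor.fuse(best_box)
        return best_box
```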

    Optimal Error Rates for Interactive Coding I: Adaptivity and Other Settings

    We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error rates in a number of different settings. Most significantly, we explore adaptive interactive communication, where the communicating parties decide who should speak next based on the history of the interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): our adaptive coding scheme tolerates any error rate below 2/7, and we show that tolerating a higher error rate is impossible. We also show that in the setting of Franklin et al. [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communication, where each party outputs a constant-size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computation are polynomially bounded. Most prior work considered coding schemes with a linear amount of communication while allowing unbounded computation. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We set forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communication.
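    For quick reference, the tolerable error rates claimed in the abstract can be collected in one place. The summary below only restates the abstract's figures; the symbol for the adversarial error rate is notation introduced here, not taken from the paper.

```latex
\begin{align*}
  \text{non-adaptive [Braverman--Rao, STOC'11]} &:\quad \varepsilon < \tfrac{1}{4} \\
  \text{adaptive (this work)}                   &:\quad \varepsilon < \tfrac{2}{7} \\
  \text{shared randomness, non-adaptive}        &:\quad \varepsilon < \tfrac{1}{2} \\
  \text{shared randomness, adaptive}            &:\quad \varepsilon < \tfrac{2}{3} \\
  \text{list decoding (constant-size lists)}    &:\quad \varepsilon < \tfrac{1}{2}
\end{align*}
```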

    Hidden covariation detection produces faster, not slower, social judgments

    In Lewicki’s (1986a) demonstration of Hidden Co-variation Detection (HCD), responses were slower to faces that corresponded with a co-variation encountered previously than to faces with novel co-variations. This slowing contrasts with the typical finding that priming leads to faster responding, and might suggest that HCD is a unique type of implicit process. We extended Lewicki’s (1986a) methodology and showed that participants exposed to nonsalient co-variations between hair length and personality were subsequently faster to respond to faces with those co-variations than to faces without them, despite lacking awareness of the critical co-variations. This result confirms that people can detect subtle relationships between features of stimuli and that, as with other types of implicit cognition, this detection facilitates responding.

    Disk failure prediction based on multi-layer domain adaptive learning

    Large-scale data storage is susceptible to failure. As disks are damaged and replaced, traditional machine learning models, which rely on historical data to make predictions, struggle to predict disk failures accurately. This paper presents a novel method for predicting disk failures by leveraging multi-layer domain adaptive learning. First, disk data with numerous faults is selected as the source domain, and disk data with fewer faults is selected as the target domain. The feature extraction network is then trained on the selected source and target domains, and contrasting the two domains drives the transfer of diagnostic knowledge from the source domain to the target domain. The experimental findings demonstrate that the proposed technique can produce a reliable prediction model and improve the ability to predict failures on disk data with few failure samples.
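    The abstract describes the approach only at a high level, so the sketch below is an assumed reading of "multi-layer domain adaptive learning": a shared feature extractor is trained on labelled source-domain disks while a discrepancy penalty aligns source and target activations at every layer. The network sizes, the 12-dimensional SMART-style input, and the use of a simple linear-kernel MMD as the alignment loss are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


def mmd(x, y):
    # Simple linear-kernel MMD between two feature batches; a stand-in for
    # whatever domain-discrepancy measure the paper actually uses.
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()


class MultiLayerDAPredictor(nn.Module):
    """Hypothetical multi-layer domain-adaptive disk-failure predictor:
    the discrepancy between source and target activations is penalised
    at every layer, not just at the final representation."""

    def __init__(self, in_dim=12, hidden=64):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
        ])
        self.classifier = nn.Linear(hidden, 2)  # healthy vs. failing

    def forward(self, x):
        activations = []
        for layer in self.layers:
            x = layer(x)
            activations.append(x)
        return self.classifier(x), activations


def training_loss(model, src_x, src_y, tgt_x, da_weight=0.1):
    logits, src_acts = model(src_x)   # labelled source-domain batch
    _, tgt_acts = model(tgt_x)        # unlabelled target-domain batch
    cls_loss = nn.functional.cross_entropy(logits, src_y)
    # Sum per-layer discrepancies so transfer is encouraged at every depth.
    da_loss = sum(mmd(s, t) for s, t in zip(src_acts, tgt_acts))
    return cls_loss + da_weight * da_loss
```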

    The Search for Invariance: Repeated Positive Testing Serves the Goals of Causal Learning

    Positive testing is characteristic of exploratory behavior, yet it seems to be at odds with the aim of information seeking. After all, repeated demonstrations of one’s current hypothesis often produce the same evidence and fail to distinguish it from potential alternatives. Research on the development of scientific reasoning and on adult rule learning has both documented and attempted to explain this behavior. The current chapter reviews this prior work and introduces a novel theoretical account, the Search for Invariance (SI) hypothesis, which suggests that producing multiple positive examples serves the goals of causal learning. This hypothesis draws on the interventionist framework of causal reasoning, which holds that causal learners are concerned with the invariance of candidate hypotheses. In a probabilistic and interdependent causal world, our primary goal is to determine whether, and in what contexts, our causal hypotheses provide accurate foundations for inference and intervention, not to disconfirm their alternatives. By recognizing the central role of invariance in causal learning, the phenomenon of positive testing may be reinterpreted as a rational information-seeking strategy.