
    Real-time computational photon-counting LiDAR

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where focal plane arrays are expensive or otherwise limited in availability. We discuss the research and development of a prototype direct time-of-flight light detection and ranging (LiDAR) system, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter-accuracy three-dimensional images in real time. The development of low-cost real-time computational LiDAR systems could be important for applications in security, defense, and autonomous vehicles.
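
    As a hedged illustration of the direct time-of-flight principle this abstract refers to, the sketch below converts a per-pixel histogram of photon arrival times into a range estimate via d = c * t / 2. The timing-bin width and the synthetic histogram are assumptions for illustration, not parameters taken from the paper.

        # Minimal sketch of direct time of flight: a per-pixel histogram of
        # photon arrival times is reduced to a range estimate via d = c * t / 2.
        # Bin width and histogram values are illustrative assumptions.
        import numpy as np

        C = 299_792_458.0          # speed of light, m/s
        BIN_WIDTH_S = 50e-12       # assumed timing-bin width of the TDC (50 ps)

        def range_from_histogram(counts: np.ndarray) -> float:
            """Estimate target range (m) from a photon-count timing histogram."""
            peak_bin = int(np.argmax(counts))          # locate the return peak
            t_flight = peak_bin * BIN_WIDTH_S          # round-trip time of flight
            return C * t_flight / 2.0                  # one-way distance

        # Example: a synthetic histogram with a return near bin 400 (~3 m).
        hist = np.random.poisson(0.2, size=1024)
        hist[400] += 150
        print(f"estimated range: {range_from_histogram(hist):.3f} m")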

    An Experimental Global Monitoring System for Rainfall-triggered Landslides using Satellite Remote Sensing Information

    Landslides triggered by rainfall can potentially be foreseen in real time by jointly using rainfall intensity-duration thresholds and information on land surface susceptibility. However, no system exists at either a national or a global scale to monitor or detect rainfall conditions that may trigger landslides, owing to the lack of extensive ground-based observing networks in many parts of the world. Recent advances in satellite remote sensing technology and the increasing availability of high-resolution geospatial products around the globe have provided an unprecedented opportunity for such a study. In this paper, a framework for developing an experimental real-time monitoring system to detect rainfall-triggered landslides is proposed by combining two necessary components: surface landslide susceptibility and a real-time space-based rainfall analysis system (http://trmm.gsfc.nasa.gov). First, a global landslide susceptibility map is derived from a combination of semi-static global surface characteristics (digital elevation and topography, slope, soil type, soil texture, land cover classification, etc.) using a GIS weighted linear combination approach. Second, an adjusted empirical relationship between rainfall intensity-duration and landslide occurrence is used to assess landslide risk in areas with high susceptibility. A major outcome of this work is the availability of a first-time global assessment of landslide risk, which is only possible because of the utilization of global satellite remote sensing products. This experimental system can be updated continuously as new satellite remote sensing products become available. The proposed system, if pursued through the wide interdisciplinary efforts recommended herein, holds the promise of growing many local landslide hazard analyses into a global decision-making support system for landslide disaster preparedness and risk mitigation activities across the world.
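
    As a rough sketch of the two components combined here, the snippet below applies a weighted linear combination to normalized surface factors and checks rainfall against an intensity-duration threshold of the form I = a * D^(-b). The weights, factor scores, and threshold coefficients are illustrative assumptions, not the values used in this work.

        # Illustrative sketch: weighted linear combination of surface factors
        # for susceptibility, plus an empirical I = a * D**(-b) rainfall
        # threshold. All numbers below are assumptions for illustration.

        def susceptibility(factors: dict[str, float], weights: dict[str, float]) -> float:
            """Weighted linear combination of normalized (0-1) factor scores."""
            return sum(weights[name] * factors[name] for name in weights)

        def rainfall_exceeds_threshold(intensity_mm_h: float, duration_h: float,
                                       a: float = 12.45, b: float = 0.42) -> bool:
            """True if rainfall lies above an I = a * D**(-b) style threshold."""
            return intensity_mm_h > a * duration_h ** (-b)

        cell = {"slope": 0.8, "soil_texture": 0.6, "land_cover": 0.4, "elevation": 0.5}
        w = {"slope": 0.4, "soil_texture": 0.25, "land_cover": 0.2, "elevation": 0.15}

        if susceptibility(cell, w) > 0.6 and rainfall_exceeds_threshold(30.0, 6.0):
            print("cell flagged: potential rainfall-triggered landslide")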

    Using a high fidelity CCGT simulator for building prognostic systems

    Pressure to reduce maintenance costs in power utilities has resulted in growing interest in prognostic monitoring systems. Accurate prediction of the occurrence of faults and failures would result not only in improved system maintenance schedules but also in improved availability and system efficiency. The desire for such a system has driven research into the emerging field of prognostics for complex systems. At the same time, there is a general move towards implementing high-fidelity simulators of complex systems, especially within the power generation field, with the nuclear power industry taking the lead. Whilst the simulators mainly function in a training capacity, the high fidelity of the simulations also allows representative data to be gathered. Using simulators in this way enables systems and components to be damaged, run to failure, and reset, all without cost or danger to personnel, and allows fault scenarios to be run faster than real time. Consequently, failure data that is normally unavailable or limited can be gathered, enabling analysis and research of fault progression in critical and high-value systems. This paper presents a case study of utilising a high-fidelity industrial Combined Cycle Gas Turbine (CCGT) simulator to generate fault data, and shows how these data can be employed to build a prognostic system. Advantages and disadvantages of this approach are discussed.
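
    A minimal prognostic sketch in the spirit of this case study, assuming the simulator yields a run-to-failure degradation indicator: a linear trend is fitted and extrapolated to a failure threshold to estimate remaining useful life. The indicator, threshold, and data below are hypothetical, not taken from the CCGT study.

        # Sketch: fit a trend to a degradation indicator from simulated
        # run-to-failure data and extrapolate to a failure threshold to
        # estimate remaining useful life. All values are illustrative.
        import numpy as np

        FAILURE_THRESHOLD = 1.0    # assumed level at which the component fails

        def remaining_useful_life(t: np.ndarray, health_index: np.ndarray) -> float:
            """Linear extrapolation of a degradation index to the failure threshold."""
            slope, intercept = np.polyfit(t, health_index, deg=1)
            if slope <= 0:
                return float("inf")                    # no degradation trend observed
            t_fail = (FAILURE_THRESHOLD - intercept) / slope
            return max(t_fail - t[-1], 0.0)            # time remaining from last sample

        # Synthetic simulator run: degradation accumulating over 100 hours.
        hours = np.linspace(0, 100, 50)
        index = 0.004 * hours + 0.05 * np.random.rand(50)
        print(f"estimated RUL: {remaining_useful_life(hours, index):.1f} h")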

    Parallelizing Windowed Stream Joins in a Shared-Nothing Cluster

    The availability of a large number of processing nodes in a parallel and distributed computing environment enables sophisticated real-time processing over high-speed data streams, as required by many emerging applications. Sliding-window stream joins are among the most important operators in a stream processing system. In this paper, we consider the issue of parallelizing a sliding-window stream join operator over a shared-nothing cluster. We propose a framework, based on a fixed or predefined communication pattern, to distribute the join processing load over the shared-nothing cluster. We consider various overheads that arise while scaling over a large number of nodes, and propose solution methodologies to cope with these issues. We implement the algorithm over a cluster using a message passing system, and present experimental results showing the effectiveness of the join processing algorithm.
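
    The sketch below gives a minimal, single-node flavor of a sliding-window symmetric hash join, with a hash-based routing function standing in for the fixed communication pattern mentioned above. The window length, tuple format, and routing scheme are assumptions for illustration, not the paper's algorithm.

        # Minimal sliding-window symmetric join sketch plus hash partitioning.
        from collections import deque

        WINDOW = 10.0  # seconds of history kept per stream (assumed)

        class WindowJoin:
            def __init__(self):
                self.windows = {"R": deque(), "S": deque()}   # (timestamp, key, value)

            def _expire(self, stream, now):
                w = self.windows[stream]
                while w and now - w[0][0] > WINDOW:
                    w.popleft()

            def insert(self, stream, ts, key, value):
                other = "S" if stream == "R" else "R"
                self._expire(other, ts)
                # Probe the opposite window, then store the new tuple in its own window.
                matches = [(key, value, v) for (t, k, v) in self.windows[other] if k == key]
                self.windows[stream].append((ts, key, value))
                return matches

        def route(key, num_nodes):
            """Hash partitioning: tuples with the same key meet on the same node."""
            return hash(key) % num_nodes

        j = WindowJoin()
        j.insert("R", 1.0, "a", "r1")
        print(j.insert("S", 2.0, "a", "s1"))   # -> [('a', 's1', 'r1')]
        print(route("a", 4))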

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems must be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.

    Traffic characterization in a communications channel for monitoring and control in real-time systems

    The response time for remote monitoring and control in real-time systems is a sensitive issue in device interconnection elements. Therefore, it is necessary to analyze the traffic of the communication system in pre-established time windows. In this paper, a methodology based on computational intelligence is proposed for identifying the availability of a data channel and the variables or characteristics that affect its performance and data transfer. The methodology is made up of four stages: a) integration of a communication system with an acquisition module and a final control structure; b) characterization of the communication channel by means of traffic variables; c) relevance analysis of the characterization space using SFFS (sequential forward floating selection); and d) classification of channel congestion as low or high using a classifier based on the Naive Bayes algorithm. The experimental setup emulates a real process using an on/off remote control of a DC motor on an Ethernet network. The communication time between the client and server was combined with the operation and control times in order to study the overall response time. The proposed approach supports decisions about channel availability and makes it possible to predict the length of the time window when the availability conditions are unknown.
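
    A minimal sketch of stage d), assuming traffic features such as latency, jitter, and packet loss: a Gaussian Naive Bayes classifier labels a time window as low or high congestion. The features and synthetic training data are illustrative, not those of the experimental setup.

        # Sketch: Naive Bayes classification of channel congestion (low/high).
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        # Each row: [mean latency (ms), jitter (ms), packet loss (%)] - assumed features
        low  = rng.normal([20, 2, 0.1], [5, 1, 0.05], size=(100, 3))
        high = rng.normal([120, 15, 2.0], [20, 5, 0.5], size=(100, 3))
        X = np.vstack([low, high])
        y = np.array([0] * 100 + [1] * 100)          # 0 = low, 1 = high congestion

        clf = GaussianNB().fit(X, y)
        sample = [[95.0, 12.0, 1.5]]                 # traffic measured in a time window
        print("high congestion" if clf.predict(sample)[0] else "low congestion")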

    Delay-Based Controller Design for Continuous-Time and Hybrid Applications

    Motivated by the availability of different types of delays in embedded systems and biological circuits, the objective of this work is to study the benefits that delay can provide in simplifying the implementation of controllers for continuous-time systems. Given a continuous-time linear time-invariant (LTI) controller, we propose three methods to approximate this controller arbitrarily precisely by a simple controller composed of delay blocks, a few integrators and possibly a unity feedback. Different problems associated with the approximation procedures, such as finding the optimal number of delay blocks or studying the robustness of the designed controller with respect to delay values, are then investigated. We also study the design of an LTI continuous-time controller satisfying given control objectives whose delay-based implementation needs the least number of delay blocks. A direct application of this work is in the sampled-data control of a real-time embedded system, where the sampling frequency is relatively high and/or the output of the system is sampled irregularly. Based on our results on delay-based controller design, we propose a digital-control scheme that can implement every continuous-time stabilizing (LTI) controller. Unlike a typical sampled-data controller, the hybrid controller introduced here (consisting of an ideal sampler, a digital controller, a number of modified second-order holds, and possibly a unity feedback) is robust to sampling jitter and can operate at arbitrarily high sampling frequencies without requiring expensive, high-precision computation.
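
    To give a flavor of what a delay block can buy, the sketch below approximates derivative action with a single delay, using dy/dt ~ (y(t) - y(t - tau))/tau for small tau. This is a generic illustration under assumed parameters, not the approximation methods proposed in this work.

        # Generic illustration: a delayed difference approximates a derivative.
        import numpy as np

        tau = 1e-3                          # assumed delay-block length (s)
        dt = 1e-4
        t = np.arange(0.0, 1.0, dt)
        y = np.sin(2 * np.pi * 2 * t)       # example controller input signal

        def delayed(signal, delay, step):
            """Shift a sampled signal by `delay` seconds (zero-padded history)."""
            n = int(round(delay / step))
            return np.concatenate([np.zeros(n), signal[:-n]])

        dy_delay = (y - delayed(y, tau, dt)) / tau            # delay-based derivative
        dy_true = 2 * np.pi * 2 * np.cos(2 * np.pi * 2 * t)   # analytic derivative
        print("max error:", np.max(np.abs(dy_delay[50:] - dy_true[50:])))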

    Proactive Fault Tolerance Through Cloud Failure Prediction Using Machine Learning

    One of the crucial aspects of cloud infrastructure is fault tolerance, and its primary responsibility is to address the situations that arise when different architectural parts fail. A sizeable cloud data center must deliver high service dependability and availability while minimizing failure incidence. However, modern large cloud data centers continue to have significant failure rates owing to a variety of factors, including hardware and software faults, which often lead to task and job failures. To reduce unexpected loss, it is critical to forecast task or job failures with high accuracy before they occur. This research examines the performance of four machine learning (ML) algorithms for forecasting failure in a real-time cloud environment to increase system availability, using real-time data gathered from the Google Cluster Workload Traces 2019. We applied four distinct supervised machine learning algorithms: logistic regression, KNN, SVM, and decision tree classifiers. Confusion matrices as well as ROC curves were used to assess the reliability and robustness of each algorithm. This study will assist cloud service providers in developing a robust fault tolerance design by optimizing device selection, consequently boosting system availability and eliminating unexpected system downtime.
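
    A minimal sketch of the evaluation described here, with synthetic data standing in for the Google Cluster Workload Traces 2019: the four classifiers are trained and compared via confusion matrices and ROC-AUC. The features, class balance, and hyperparameters are assumptions, not those used in the study.

        # Sketch: compare four supervised classifiers with confusion matrices
        # and ROC-AUC on synthetic failure-prediction data.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix, roc_auc_score

        X, y = make_classification(n_samples=2000, n_features=10, weights=[0.8, 0.2],
                                   random_state=0)   # 1 = task/job failure (assumed)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "KNN": KNeighborsClassifier(),
            "SVM": SVC(probability=True),
            "decision tree": DecisionTreeClassifier(),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            score = model.predict_proba(X_te)[:, 1]
            print(name, confusion_matrix(y_te, model.predict(X_te)).ravel(),
                  "ROC-AUC", round(roc_auc_score(y_te, score), 3))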

    Energy-Efficient Fault-Tolerant Scheduling Algorithm for Real-Time Tasks in Cloud-Based 5G Networks

    Green computing has become a pressing issue for both academia and industry. Fifth-generation (5G) mobile networks place high demands on energy efficiency and low latency. The cloud radio access network provides efficient resource use, high performance, and high availability for 5G systems. However, hardware and software faults of cloud systems may lead to failure in providing real-time services. Developing fault tolerance techniques can effectively enhance the reliability and availability of real-time cloud services. The core idea of a fault-tolerant scheduling algorithm is to introduce redundancy to ensure that tasks can be finished in the case of permanent or transient system failures. Nevertheless, this redundancy incurs extra overhead for cloud systems, which results in considerable energy consumption. In this paper, we focus on the problem of how to reduce energy consumption while providing fault tolerance. We first propose a novel primary-backup-based fault-tolerant scheduling architecture for real-time tasks in the cloud environment. Based on this architecture, we present an energy-efficient fault-tolerant scheduling algorithm for real-time tasks (EFTR). EFTR adopts a proactive strategy to increase the system processing capacity and employs a rearrangement mechanism to improve resource utilization. Simulation experiments are conducted on the CloudSim platform to evaluate the feasibility and effectiveness of EFTR. Compared with existing fault-tolerant scheduling algorithms, EFTR shows excellent performance in energy conservation and task schedulability.
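
    As a hedged sketch of the primary-backup idea underlying this architecture, the snippet below places each task on one host as a primary and on a different host as a passive backup, rejecting placements that miss the deadline. The host model and earliest-finish placement rule are illustrative assumptions, not EFTR itself.

        # Sketch: primary-backup placement of real-time tasks on distinct hosts.
        from dataclasses import dataclass, field

        @dataclass
        class Host:
            name: str
            ready: float = 0.0                      # time the host becomes free
            tasks: list = field(default_factory=list)

        def place(task_id: str, exec_time: float, deadline: float, hosts: list[Host]):
            """Assign primary and backup copies to distinct earliest-finishing hosts."""
            ranked = sorted(hosts, key=lambda h: h.ready + exec_time)
            primary, backup = ranked[0], ranked[1]
            for host, copy in ((primary, "primary"), (backup, "backup")):
                finish = host.ready + exec_time
                if finish > deadline:
                    raise RuntimeError(f"{task_id}: {copy} copy misses its deadline")
                host.tasks.append((task_id, copy, finish))
            primary.ready += exec_time              # backup stays passive unless needed
            return primary.name, backup.name

        cluster = [Host("h1"), Host("h2"), Host("h3")]
        print(place("t1", exec_time=2.0, deadline=5.0, hosts=cluster))
        print(place("t2", exec_time=1.0, deadline=5.0, hosts=cluster))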