47,408 research outputs found

    A Taxonomy for Congestion Control Algorithms in Vehicular Ad Hoc Networks

    Congestion control is one of the main concerns in Vehicular Ad hoc Networks (VANETs) and has attracted considerable research attention. Many algorithms have accordingly been proposed to alleviate congestion, although it is hard to identify among them an algorithm that suits both applications and safety messages. Safety messages comprise beacons and event-driven messages, and delay and reliability are essential requirements for event-driven messages. In dense networks, where many vehicles broadcast beacon messages at high rates, the Control Channel (CCH), which is used for sending beacons, is easily congested. At the same time, a congestion-free control channel is necessary to guarantee the reliable and timely delivery of event-driven messages. This study therefore addresses the congestion problem in VANETs by taking a comprehensive look at existing congestion control algorithms. In addition, a taxonomy of congestion control algorithms in VANETs is presented, based on three classes: proactive, reactive, and hybrid. Finally, we identify the criteria that a good congestion control algorithm must fulfill.
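
    As an aside, the three-class taxonomy described above maps naturally onto a small data structure. The sketch below is purely illustrative: the algorithm names and the two criterion fields are hypothetical, not taken from the survey, and it only shows how algorithms could be catalogued under the proactive/reactive/hybrid classes.

```python
from dataclasses import dataclass
from enum import Enum, auto


class CongestionControlClass(Enum):
    """The three classes named in the taxonomy."""
    PROACTIVE = auto()  # act before congestion arises (e.g., pre-adapt beacon rate)
    REACTIVE = auto()   # act once congestion is detected on the CCH
    HYBRID = auto()     # combine proactive and reactive behaviour


@dataclass
class CongestionControlAlgorithm:
    name: str
    taxonomy_class: CongestionControlClass
    controls_beacon_rate: bool        # hypothetical criterion
    bounds_event_driven_delay: bool   # hypothetical criterion


# Hypothetical catalogue entries, purely to illustrate the structure.
catalogue = [
    CongestionControlAlgorithm("beacon rate limiting", CongestionControlClass.PROACTIVE, True, False),
    CongestionControlAlgorithm("CCH load shedding", CongestionControlClass.REACTIVE, True, True),
]

reactive = [a.name for a in catalogue if a.taxonomy_class is CongestionControlClass.REACTIVE]
print(reactive)
```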

    A machine learning-based framework for preventing video freezes in HTTP adaptive streaming

    HTTP Adaptive Streaming (HAS) is the dominant technology for delivering video over the Internet, owing to its ability to adapt the video quality to the available bandwidth. Despite that, HAS clients can still suffer from freezes in the video playout, the main factor degrading users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework in which a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol for implementing the software-defined networking principle. The main element of the controller is a Machine Learning (ML) engine based on the random undersampling boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization needed to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge of the streamed videos or of the clients' characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmark HAS solutions under various video streaming scenarios. In particular, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time by about 65% and 45%, respectively, compared to the benchmark algorithms. These results represent a major improvement in the QoE of users watching multimedia content online.
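
    A rough sketch of the kind of detection step the abstract describes: a random-undersampling boosting classifier, trained on network-level measurements only, estimates whether a client is close to a freeze, and a positive decision would trigger segment prioritization in the controller. The feature set, the example data, the 0.7 risk threshold (a crisp stand-in for the fuzzy-logic stage), and the use of imbalanced-learn's RUSBoostClassifier are assumptions for illustration, not the authors' implementation.

```python
# Sketch only: features, threshold, and classifier choice are illustrative assumptions.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier

# Hypothetical per-client features measured at network nodes:
# [segment inter-request time (s), downlink throughput (Mbps), retransmissions]
X_train = np.array([
    [1.9, 8.0, 0],
    [4.5, 1.2, 3],   # sluggish client, a freeze followed shortly after
    [2.1, 6.5, 1],
    [5.0, 0.8, 5],
])
y_train = np.array([0, 1, 0, 1])  # 1 = freeze occurred soon after these measurements

# Random undersampling boosting copes with the class imbalance
# (freezes are rare compared to normal playout).
clf = RUSBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

def should_prioritize(features, risk_threshold=0.7):
    """Return True if this client's segments should be prioritized."""
    risk = clf.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    # A real controller would install an OpenFlow rule for this flow here.
    return risk >= risk_threshold

print(should_prioritize([4.8, 1.0, 4]))
```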

    SPARCS: Stream-processing architecture applied in real-time cyber-physical security

    In this paper, we showcase a complete, end-to-end, fault-tolerant, bandwidth- and latency-optimized architecture for real-time utilization of data from multiple sources that allows the collection, transport, storage, processing, and display of both raw data and analytics. This architecture can be applied to a wide variety of applications, ranging from automation/control to monitoring and security. We propose a practical, hierarchical design that allows easy addition and reconfiguration of software and hardware components, while utilizing local processing of data at the sensor or field-site ('fog computing') level to reduce latency and upstream bandwidth requirements. The system supports multiple fail-safe mechanisms to guarantee the delivery of sensor data. We describe the application of this architecture to cyber-physical security (CPS) by supporting security monitoring of an electric distribution grid, through the collection and analysis of distribution-grid-level phasor measurement unit (PMU) data, as well as Supervisory Control And Data Acquisition (SCADA) communication in the control area network.
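
    A minimal illustration of the 'fog computing' pattern the abstract describes: readings are processed locally at the field-site level, only summaries and flagged anomalies are forwarded upstream, and unsent summaries stay buffered as a simple fail-safe. The aggregation window, anomaly rule, and forward_upstream stub below are hypothetical; this is a sketch of the pattern, not the SPARCS implementation.

```python
from collections import deque
from statistics import mean

def forward_upstream(summary):
    """Placeholder for the real transport (e.g., a message-bus publish)."""
    print("upstream:", summary)
    return True  # return False to simulate a broken uplink

class FogNode:
    """Field-site processor: aggregate locally, forward summaries upstream."""

    def __init__(self, window=10, anomaly_threshold=1.5):
        self.window = window
        self.anomaly_threshold = anomaly_threshold
        self.readings = []
        self.outbox = deque()  # fail-safe buffer for unsent summaries

    def ingest(self, reading):
        """Accept a raw reading (e.g., a PMU frequency sample)."""
        self.readings.append(reading)
        if len(self.readings) >= self.window:
            self._summarize()

    def _summarize(self):
        avg = mean(self.readings)
        anomalies = [r for r in self.readings if abs(r - avg) > self.anomaly_threshold]
        self.outbox.append({"avg": round(avg, 3), "anomalies": anomalies})
        self.readings.clear()
        self._flush()

    def _flush(self):
        # Summaries stay buffered if the upstream link is down,
        # so no sensor data is silently lost.
        while self.outbox:
            if not forward_upstream(self.outbox[0]):
                break
            self.outbox.popleft()

node = FogNode(window=3)
for sample in [59.98, 60.01, 61.90]:
    node.ingest(sample)
```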

    Input Prioritization for Testing Neural Networks

    Deep neural networks (DNNs) are increasingly being adopted for sensing and control functions in a variety of safety- and mission-critical systems such as self-driving cars, autonomous air vehicles, medical diagnostics, and industrial robotics. Failures of such systems can lead to loss of life or property, which necessitates stringent verification and validation to provide high assurance. Though formal verification approaches are being investigated, testing remains the primary technique for assessing the dependability of such systems. Due to the nature of the tasks handled by DNNs, the cost of obtaining test oracle data (the expected output, or label, for a given input) is high, which significantly impacts the amount and quality of testing that can be performed. Thus, prioritizing input data for testing DNNs in meaningful ways to reduce the cost of labeling can go a long way toward increasing testing efficacy. This paper proposes using gauges of the DNN's sentiment, derived from the computation performed by the model, as a means to identify inputs that are likely to reveal weaknesses. We empirically assessed three such sentiment measures for prioritization (confidence, uncertainty, and surprise) and compared their effectiveness in terms of fault-revealing capability and retraining effectiveness. The results indicate that sentiment measures can effectively flag inputs that expose unacceptable DNN behavior. For MNIST models, the average percentage of inputs correctly flagged ranged from 88% to 94.8%.
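
    Two of the three sentiment measures named in the abstract, confidence and uncertainty, can be illustrated directly from a model's softmax output: a low maximum probability or a high predictive entropy marks an input worth labeling and testing first. The scoring and ranking below are a generic sketch under those common definitions; the paper's exact formulations, and its surprise measure, are not reproduced here.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prioritize(logits):
    """Rank inputs so that low-confidence / high-uncertainty ones come first."""
    probs = softmax(logits)
    confidence = probs.max(axis=1)                               # max softmax probability
    uncertainty = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # predictive entropy
    # Primary key: lowest confidence first; ties broken by highest entropy.
    order = np.lexsort((-uncertainty, confidence))
    return order, confidence, uncertainty

# Three hypothetical MNIST-style outputs over 10 classes.
logits = np.array([
    np.eye(10)[3] * 12.0,   # very confident prediction
    np.zeros(10),           # maximally uncertain prediction
    np.eye(10)[7] * 2.0,    # mildly confident prediction
])
order, conf, unc = prioritize(logits)
print(order)  # uncertain inputs are ranked ahead of confident ones
```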

    A situational approach for the definition and tailoring of a data-driven software evolution method

    Successful software evolution heavily depends on selecting the right features to include in the next release. Such selection is difficult, and companies often report bad experiences with user acceptance. To overcome this challenge, an increasing number of approaches propose intensive use of data to drive evolution. This trend has motivated the SUPERSEDE method, which proposes the collection and analysis of user feedback and monitoring data as the baseline for eliciting and prioritizing requirements, which are then used to plan the next release. However, every company may be interested in tailoring this method depending on factors such as project size and scope. In order to provide a systematic approach, we propose the use of Situational Method Engineering to describe SUPERSEDE and guide its tailoring to a particular context.

    Challenges to generating political prioritization for adolescent sexual and reproductive health in Kenya: A qualitative study.

    Background: Despite the high burden of adverse adolescent sexual and reproductive health (SRH) outcomes, adolescent SRH has remained a low political priority in Kenya. We examined the factors that have shaped the lack of current political prioritization of adolescent SRH service provision. Methods: We used the Shiffman and Smith policy framework, consisting of four categories (actor power, ideas, political contexts, and issue characteristics), to analyse the factors that have shaped political prioritization of adolescent SRH. We undertook semi-structured interviews with 14 members of adolescent SRH networks at the national level between February and April 2019 and conducted a thematic analysis of the interviews. Findings: Several factors hinder the attainment of political priority for adolescent SRH in Kenya. On actor power, the adolescent SRH community was diverse and united in its adoption of international norms and policies, but it lacked policy entrepreneurs to provide strong leadership, and policy windows were often missed. Regarding ideas, community members lacked consensus on a cohesive public positioning of the problem. On issue characteristics, the perception of adolescents as lacking political power made politicians reluctant to act on the existing data on the severity of adolescent SRH. There was also a lack of consensus on the nature of the interventions to be implemented. Pertaining to political contexts, sectoral funding by donors and the government treasury created tension among the different government ministries, resulting in siloed approaches, lack of coordination, and overall inefficiency. However, the SRH community has several strengths that augur well for future political support: the diverse multi-sectoral background of its members, commitment to improving adolescent SRH, and the potential to link with other health priorities such as maternal health and HIV/AIDS. Conclusion: To increase political attention to adolescent SRH in Kenya, policy actors urgently need to: 1) create a more cohesive community of advocates across sectors, 2) develop a clearer public positioning of adolescent SRH, 3) agree on a set of precise approaches that will resonate with the political system, and 4) identify and nurture policy entrepreneurs to facilitate the coupling of adolescent SRH with potential solutions when windows of opportunity arise.

    Automated Global Feature Analyzer - A Driver for Tier-Scalable Reconnaissance

    For the purposes of space flight, reconnaissance field geologists have trained to become astronauts. However, the initial forays to Mars and other planetary bodies have been made by purely robotic craft. Therefore, training and equipping a robotic craft with the sensory and cognitive capabilities of a field geologist, to form a science craft, is a necessary prerequisite. Numerous steps are necessary for a science craft to be able to map, analyze, and characterize a geologic field site, as well as to effectively formulate working hypotheses. We report on the continued development of the integrated software system AGFA (Automated Global Feature Analyzer), originated by Fink at Caltech and his collaborators in 2001. AGFA is an automatic, feature-driven target characterization system that operates in an imaged operational area, such as a geologic field site on a remote planetary surface. AGFA performs automated target identification and detection through segmentation, providing feature extraction, classification, and prioritization within mapped or imaged operational areas at different length scales and resolutions, depending on the vantage point (e.g., spaceborne, airborne, or ground). AGFA extracts features such as target size, color, albedo, vesicularity, and angularity. Based on the extracted features, AGFA summarizes the mapped operational area numerically and flags targets of "interest", i.e., targets that exhibit sufficient anomaly within the feature space. AGFA enables automated science analysis aboard robotic spacecraft and, embedded in tier-scalable reconnaissance mission architectures, is a driver of future intelligent and autonomous robotic planetary exploration.
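
    The flagging of targets that "exhibit sufficient anomaly within the feature space" lends itself to a simple illustration: given per-target feature vectors (size, color, albedo, vesicularity, angularity), standardize them and flag targets far from the field-site mean. The distance measure, threshold, and example values below are assumptions, not AGFA's actual scoring.

```python
import numpy as np

def flag_targets_of_interest(features, threshold=2.0):
    """Flag targets that are outliers in the standardized feature space.

    features: (n_targets, n_features) array, e.g. columns for
              size, mean color value, albedo, vesicularity, angularity.
    Returns the indices of targets whose distance from the mean, measured
    in per-feature standard deviations, exceeds `threshold`.
    """
    X = np.asarray(features, dtype=float)
    z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    anomaly_score = np.linalg.norm(z, axis=1)
    return np.flatnonzero(anomaly_score > threshold), anomaly_score

# Hypothetical targets in a mapped operational area; the third one is
# deliberately unlike the rest and should be flagged as "of interest".
targets = [
    [1.0, 0.42, 0.18, 0.05, 0.30],
    [1.1, 0.40, 0.17, 0.06, 0.32],
    [6.0, 0.90, 0.55, 0.60, 0.90],
    [0.9, 0.44, 0.19, 0.05, 0.29],
]
flagged, scores = flag_targets_of_interest(targets)
print(flagged, scores.round(2))
```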