
    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies among the standards developing organizations working on SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In this scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the type of interface in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
    Funding: European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567; Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks (S&NSEC) project under Grant TEC2013-47960-C4-3-
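    Traffic splitting among multiple paths is one of the congestion-minimization techniques mentioned above. As a rough, hypothetical illustration (not drawn from any of the surveyed protocols), the sketch below divides a demand over candidate paths in inverse proportion to their current utilization; the function name and the utilization inputs are assumptions.

```python
# Hypothetical illustration of congestion-aware traffic splitting:
# a demand is divided over candidate paths in inverse proportion to
# each path's current utilization, so lightly loaded paths carry more.

def split_traffic(demand_mbps, path_utilization):
    """path_utilization: dict mapping path id -> current load in [0, 1)."""
    headroom = {p: max(1.0 - u, 1e-6) for p, u in path_utilization.items()}
    total = sum(headroom.values())
    return {p: demand_mbps * h / total for p, h in headroom.items()}

if __name__ == "__main__":
    shares = split_traffic(100.0, {"path_a": 0.2, "path_b": 0.5, "path_c": 0.8})
    print(shares)  # path_a gets the largest share, path_c the smallest
```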

    High-entropy high-hardness metal carbides discovered by entropy descriptors

    High-entropy materials have attracted considerable interest due to the combination of useful properties and promising applications. Predicting their formation remains the major hindrance to the discovery of new systems. Here we propose a descriptor - entropy forming ability - for addressing synthesizability from first principles. The formalism, based on the energy distribution spectrum of randomized calculations, captures the accessibility of equally-sampled states near the ground state and quantifies configurational disorder capable of stabilizing high-entropy homogeneous phases. The methodology is applied to disordered refractory 5-metal carbides - promising candidates for high-hardness applications. The descriptor correctly predicts the ease with which compositions can be experimentally synthesized as rock-salt high-entropy homogeneous phases, validating the ansatz and, in some cases, going beyond intuition. Several of these materials exhibit hardness up to 50% higher than rule-of-mixtures estimates. The entropy descriptor method has the potential to accelerate the search for high-entropy systems by rationally combining first principles with experimental synthesis and characterization.
    Comment: 12 pages, 2 figures
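    A minimal sketch of the descriptor idea, assuming it is computed as the inverse of the (degeneracy-weighted) spread of the formation-enthalpy spectrum sampled over randomized configurations; the function name and the plain standard deviation are illustrative assumptions, not the paper's exact prescription.

```python
# Minimal sketch of an "entropy forming ability"-style descriptor.
# Assumption: the descriptor is the inverse of the spread (standard
# deviation) of the energy spectrum sampled over randomized
# configurations; a tighter spectrum -> disorder more easily stabilizes
# a single homogeneous phase.
import numpy as np

def entropy_forming_ability(energies_ev_per_atom, degeneracies=None):
    """energies: formation enthalpies of sampled ordered configurations.
    degeneracies: optional multiplicity of each sampled configuration."""
    e = np.asarray(energies_ev_per_atom, dtype=float)
    w = np.ones_like(e) if degeneracies is None else np.asarray(degeneracies, float)
    mean = np.average(e, weights=w)
    spread = np.sqrt(np.average((e - mean) ** 2, weights=w))
    return 1.0 / spread  # large value -> disorder easily stabilized

# Example: a composition whose sampled configurations lie close in energy
# scores higher than one with a broad, scattered spectrum.
print(entropy_forming_ability([0.010, 0.012, 0.011, 0.013]))
print(entropy_forming_ability([0.010, 0.080, 0.150, 0.020]))
```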

    The role of admission control in assuring multiple services quality

    Considering that network overprovisioning by itself is not always an attainable or lasting solution, Admission Control (AC) mechanisms are recommended to keep network load under control and assure the required service quality levels. This article debates the role of AC in multiservice IP networks, providing an overview and discussion of current and representative AC approaches and highlighting their main characteristics and their pros and cons regarding the management of network service quality. In this debate, particular emphasis is given to an enhanced monitoring-based AC proposal for assuring multiple service levels in multiclass networks.
    Acknowledgment: Centro de Ciências e Tecnologias da Computação do Departamento de Informática da Universidade do Minho (CCTC)
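    For illustration only (this is not the article's specific proposal), a monitoring-based AC decision can be reduced to checking whether the measured load of a service class, plus the requested rate, stays under that class's configured threshold; the class names and thresholds below are assumptions.

```python
# Hypothetical per-class, measurement-based admission control check.
# Assumes a monitoring process supplies the current measured load of
# each service class; a new flow is admitted only if it fits under
# the class's configured utilization threshold.

CLASS_THRESHOLDS_MBPS = {"voice": 100.0, "video": 400.0, "best_effort": 500.0}

def admit_flow(service_class, requested_mbps, measured_load_mbps):
    limit = CLASS_THRESHOLDS_MBPS[service_class]
    return measured_load_mbps + requested_mbps <= limit

# Example: a 10 Mb/s video flow is admitted only while the measured
# class load leaves enough headroom.
print(admit_flow("video", 10.0, measured_load_mbps=385.0))   # True
print(admit_flow("video", 10.0, measured_load_mbps=395.0))   # False
```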

    Attentional Narrowing: Triggering, Detecting and Overcoming a Threat to Safety.

    In complex safety-critical domains, such as aviation or medicine, considerable multitasking requirements and attentional demands are imposed on operators who may, during off-nominal events, also experience high levels of anxiety. High task load and anxiety can trigger attentional narrowing – an involuntary reduction in the range of cues that can be utilized by an operator. As evidenced by numerous accidents, attentional narrowing is a highly undesirable and potentially dangerous state, as it hampers information gathering, reasoning, and problem solving. However, because the problem is difficult to reproduce in controlled environments, little is known about its triggers, markers, and possible countermeasures. Therefore, the goals of this dissertation were to (1) identify reliable triggers of attentional narrowing in controlled laboratory settings, (2) identify real-time markers of attentional narrowing that can also distinguish that phenomenon from focused attention – another state of reduced attentional field that, contrary to attentional narrowing, is deliberate and often desirable, and (3) develop and test display designs that help overcome the narrowing of the attentional field. Based on a series of experiments in the context of a visual search task and a multitasking environment, novel unsolvable problems were identified as the most reliable trigger of attentional narrowing. Eye tracking was used successfully to detect and trace the phenomenon. Specifically, three eye tracking metrics emerged as promising markers of attentional narrowing: (1) the percentage of fixations, (2) dwell duration, and (3) fixation duration in the display area where the novel problem was presented. These metrics were used to develop an algorithm capable of detecting attentional narrowing in real time and distinguishing it from focused attention. A command display (as opposed to a status display) was shown to support participants in broadening their attentional field and improving their time-sharing performance. This dissertation contributes to the knowledge base on attentional narrowing and, more generally, attention management. A novel eye-tracking-based technique for detecting the attentional state and a promising countermeasure to the problem were developed. Overall, the findings from this research contribute to improved safety and performance in a range of complex high-risk domains.
    PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/135773/1/jprinet_1.pd
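    The abstract does not spell out the detection algorithm; the sketch below only shows, under assumed thresholds and window settings, how the three named eye-tracking metrics (percentage of fixations, dwell duration, and fixation duration on the area where the novel problem appears) could be combined into a rolling real-time check. It does not attempt the dissertation's separation of narrowing from deliberate focused attention.

```python
# Hypothetical rolling-window check combining the three eye-tracking
# metrics named in the abstract. Thresholds and window length are
# invented for illustration; the dissertation's actual algorithm
# (including how it separates narrowing from deliberate focus) differs.
from dataclasses import dataclass

@dataclass
class Fixation:
    area_of_interest: str   # e.g. "novel_problem", "primary_task"
    duration_s: float

def narrowing_suspected(window, aoi="novel_problem",
                        min_fixation_share=0.7,
                        min_mean_fixation_s=0.4,
                        min_total_dwell_s=8.0):
    """window: list of Fixation records from the most recent time window."""
    on_aoi = [f for f in window if f.area_of_interest == aoi]
    if not window or not on_aoi:
        return False
    share = len(on_aoi) / len(window)            # percentage of fixations on the AOI
    dwell = sum(f.duration_s for f in on_aoi)    # total dwell duration on the AOI
    mean_fix = dwell / len(on_aoi)               # mean fixation duration on the AOI
    return (share >= min_fixation_share
            and dwell >= min_total_dwell_s
            and mean_fix >= min_mean_fixation_s)
```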

    Envelope Determinants of Equine Lentiviral Vaccine Protection

    Lentiviral envelope (Env) antigenic variation and associated immune evasion present major obstacles to vaccine development. The concept that Env is a critical determinant of vaccine efficacy is well accepted; however, defined correlates of protection associated with Env variation have yet to be determined. We previously reported an attenuated equine infectious anemia virus (EIAV) vaccine study that directly examined the effect of lentiviral Env sequence variation on vaccine efficacy. The study identified a significant, inverse, linear correlation between vaccine efficacy and increasing divergence of the challenge virus Env gp90 protein compared to the vaccine virus gp90. The report demonstrated approximately 100% protection of immunized ponies from disease after challenge by virus with a homologous gp90 (EV0), and roughly 40% protection against challenge by virus (EV13) with a gp90 13% divergent from the vaccine strain. In the current study we examine whether the protection observed when challenging with the EV0 strain could be conferred to animals via chimeric challenge viruses between the EV0 and EV13 strains, allowing protection to be mapped to specific Env sequences. Viruses containing the EV13 proviral backbone and selected domains of the EV0 gp90 were constructed, and their in vitro and in vivo infectivity was examined. Vaccine efficacy studies indicated that homology between the vaccine strain gp90 and the N-terminus of the challenge strain gp90 was capable of inducing immunity that resulted in significantly lower levels of post-challenge virus and significantly delayed the onset of disease. However, a homologous N-terminal region alone inserted in the EV13 backbone could not impart the 100% protection observed with the EV0 strain. The data presented here highlight the complicated and potentially contradictory relationship between in vitro virulence and in vivo pathogenicity. The study underscores the importance of structural conformation for immunogens and emphasizes the need for antibody-binding, rather than neutralizing, assays that correlate with vaccine protection. © 2013 Craigo et al.

    More is more in language learning: reconsidering the less-is-more hypothesis

    The Less-is-More hypothesis was proposed to explain age-of-acquisition effects in first language (L1) acquisition and second language (L2) attainment. We scrutinize different renditions of the hypothesis by examining how learning outcomes are affected by (1) limited cognitive capacity, (2) reduced interference resulting from less prior knowledge, and (3) simplified language input. While there is little-to-no evidence of benefits of limited cognitive capacity, there is ample support for a More-is-More account linking enhanced capacity with better L1- and L2-learning outcomes, and reduced capacity with childhood language disorders. Instead, reduced prior knowledge (relative to adults) may afford children greater flexibility in inductive inference; this contradicts the idea that children benefit from a more constrained hypothesis space. Finally, studies of child-directed speech (CDS) confirm benefits from less complex input at early stages, but also emphasize how greater lexical and syntactic complexity of the input confers benefits in L1 attainment.

    AN INTELLIGENT PASSIVE ISLANDING DETECTION AND CLASSIFICATION SCHEME FOR A RADIAL DISTRIBUTION SYSTEM

    Distributed generation (DG) provides users with a dependable and cost-effective source of electricity. DG units are directly connected to the distribution system at customer load locations. Integrating DG units into an existing system is highly important due to its many advantages: a high penetration level of distributed generation provides vast techno-economic and environmental benefits, such as high reliability, reduced total system losses, improved efficiency, low capital cost, abundant availability in nature, and low carbon emissions. However, one of the main challenges in microgrids (MG) is the islanded operation of DGs: effective detection of islanding and rapid DG disconnection are essential to prevent safety problems and equipment damage. The most prevalent islanding protection schemes are based on passive techniques, which cause no disruption to the system but have extensive non-detection zones. This thesis therefore aims to design a simple and effective intelligent passive islanding detection approach using a CatBoost classifier, with features extracted from the three-phase voltages and the instantaneous power per phase visible at the DG terminal. The initial features are extracted using the Gabor transform (GT), a signal processing (SP) technique that provides a time-frequency representation of the signal and reveals several hidden features of the processed signals to serve as input to the intelligent classifier. A radial distribution system with two DG units was used to evaluate the effectiveness of the proposed islanding detection method, which was verified by comparing its results to those of other methods that use a random forest (RF) or a basic artificial neural network (ANN) as the classifier. This was accomplished through extensive simulations using the DIgSILENT PowerFactory® software. Several measures were used for evaluation, including accuracy (F1 score), the area under the curve (AUC), and training time. The suggested technique has a classification accuracy of 97.1 per cent across both islanded and non-islanded events, whereas the RF and ANN classifiers achieve accuracies of 94.23 and 54.8 per cent, respectively. In terms of training time, the ANN, RF, and CatBoost classifiers take 1.4 seconds, 1.21 seconds, and 0.88 seconds, respectively. The detection time for all methods was less than one cycle. These metrics demonstrate that the suggested strategy is robust and capable of distinguishing between islanding events and other system disruptions.
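    The thesis's exact feature set and model settings are not given in the abstract; the sketch below only illustrates the described pipeline, using a Gaussian-windowed short-time Fourier transform as a stand-in for the Gabor transform and a default CatBoost classifier. The sampling rate, window sizes, and summary statistics are assumptions.

```python
# Illustrative pipeline: Gabor-transform-style time-frequency features
# (here a Gaussian-windowed STFT) summarized into a feature vector,
# then fed to a CatBoost classifier. Parameters are assumed, not the
# thesis's actual settings.
import numpy as np
from scipy.signal import stft
from catboost import CatBoostClassifier

FS = 10_000  # assumed sampling rate in Hz

def gabor_features(signal):
    """Summarize the time-frequency magnitude spectrum of one waveform."""
    _, _, Z = stft(signal, fs=FS, window=("gaussian", 32), nperseg=256)
    mag = np.abs(Z)
    return np.array([mag.mean(), mag.std(), mag.max(),
                     np.median(mag), (mag ** 2).sum()])

def build_features(three_phase_voltages, per_phase_power):
    """Concatenate features from the 3 voltage and 3 power waveforms."""
    signals = list(three_phase_voltages) + list(per_phase_power)
    return np.concatenate([gabor_features(s) for s in signals])

# X: one feature row per recorded event; y: 1 = islanding, 0 = other disturbance
# X = np.vstack([build_features(v, p) for v, p in events])
# model = CatBoostClassifier(verbose=False)
# model.fit(X, y)
# model.predict(build_features(new_voltages, new_power).reshape(1, -1))
```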

    Application of spectral and spatial indices for specific class identification in Airborne Prism EXperiment (APEX) imaging spectrometer data for improved land cover classification

    Hyperspectral remote sensing's ability to capture spectral information of targets in very narrow bandwidths gives rise to many intrinsic applications. However, the major limitation to its applicability is its dimensionality, known as the Hughes phenomenon. Traditional classification and image processing approaches fail to process data along many contiguous bands due to inadequate training samples. Another challenge for successful classification is dealing with the real-world scenario of mixed pixels, i.e. the presence of more than one class within a single pixel. An attempt has been made to deal with the problems of dimensionality and mixed pixels, with the objective of improving the accuracy of class identification. In this paper, we discuss the application of indices to cope with the dimensionality of the Airborne Prism EXperiment (APEX) hyperspectral Open Science Dataset (OSD) and to improve the classification accuracy using the Possibilistic c-Means (PCM) algorithm. Spectral and spatial indices were formulated to describe the information in the dataset with lower dimensionality, and this reduced representation was used for classification, with the aim of improving the accuracy with which specific classes are determined. Spectral indices are compiled from the spectral signatures of the targets, and spatial indices are defined using texture analysis over defined neighbourhoods. The classification of 20 classes of varying spatial distributions was considered in order to evaluate the applicability of spectral and spatial indices in the extraction of specific class information. The classification of the dataset was performed in two stages: first with spectral indices alone, and then with a combination of spectral and spatial indices, each used as input to the PCM classifier. In addition to a reduction in entropy, the spectral-spatial indices approach achieved an overall classification accuracy of 80.50%, against 65% (spectral indices only) and 59.50% (optimally determined principal components).
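    The abstract does not define the indices or the classifier settings; the fragment below only recalls the standard Possibilistic c-Means typicality formula, u_ij = 1 / (1 + (||x_j - v_i||^2 / eta_i)^(1/(m-1))), evaluated for fixed class centres as an assumed illustration of the PCM step.

```python
# Standard Possibilistic c-Means (PCM) typicality of each pixel for each
# class, u_ij = 1 / (1 + (d_ij^2 / eta_i)^(1/(m-1))). Class centres,
# bandwidth parameters eta and the fuzzifier m are placeholders here.
import numpy as np

def pcm_typicality(pixels, centres, eta, m=2.0):
    """pixels: (n, d) index vectors; centres: (c, d); eta: (c,)."""
    d2 = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)  # (n, c)
    return 1.0 / (1.0 + (d2 / eta[None, :]) ** (1.0 / (m - 1.0)))

# Example with 2-dimensional index vectors and two classes:
pix = np.array([[0.2, 0.8], [0.7, 0.1]])
cen = np.array([[0.2, 0.9], [0.8, 0.1]])
print(pcm_typicality(pix, cen, eta=np.array([0.05, 0.05])))
```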

    MOSAIC: A Multi-Objective Optimization Framework for Sustainable Datacenter Management

    In recent years, cloud service providers have been building and hosting datacenters across multiple geographical locations to provide robust services. However, the geographical distribution of datacenters places growing pressure on both local and global environments, particularly when it comes to water usage and carbon emissions. Unfortunately, efforts to reduce the environmental impact of such datacenters often lead to an increase in the cost of datacenter operations. To co-optimize the energy cost, carbon emissions, and water footprint of datacenter operation from a global perspective, we propose a novel framework for multi-objective sustainable datacenter management (MOSAIC) that integrates adaptive local search with a collaborative decomposition-based evolutionary algorithm to intelligently manage geographical workload distribution and datacenter operations. Our framework sustainably allocates workloads to datacenters while taking into account multiple geography- and time-based factors, including renewable energy sources, variable energy costs, power usage efficiency, carbon factors, and the water intensity of energy. Our experimental results show that, compared to the best-known prior frameworks, MOSAIC can achieve a 27.45x speedup and a 1.53x improvement in Pareto hypervolume while reducing the carbon footprint by up to 1.33x, the water footprint by up to 3.09x, and energy costs by up to 1.40x. In the simultaneous three-objective co-optimization scenario, MOSAIC achieves a cumulative improvement across all objectives (carbon, water, cost) of up to 4.61x compared to the state of the art.
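    MOSAIC's algorithm is not given in the abstract; as a generic illustration of the decomposition idea it builds on, the sketch below scores a candidate workload allocation with a weighted Tchebycheff scalarization over the three objectives (cost, carbon, water), which is how decomposition-based evolutionary methods typically turn a multi-objective problem into single-objective subproblems. All intensities, weights, and the reference point are assumptions.

```python
# Generic decomposition-style scalarization for a candidate allocation
# of workload shares across datacenters. Objective coefficients and the
# Tchebycheff weighting are illustrative, not MOSAIC's actual model.
import numpy as np

# Per-datacenter intensities: [energy cost $/kWh, kgCO2/kWh, litres water/kWh]
DC_INTENSITY = np.array([[0.08, 0.20, 1.8],
                         [0.12, 0.05, 3.1],
                         [0.10, 0.40, 0.9]])

def objectives(shares, total_energy_kwh=1000.0):
    """Cost, carbon and water totals for a workload split (shares sum to 1)."""
    return shares @ DC_INTENSITY * total_energy_kwh

def tchebycheff(shares, weights, ideal):
    """Weighted Tchebycheff scalarization: smaller is better."""
    f = objectives(shares)
    return np.max(weights * np.abs(f - ideal))

weights = np.array([0.4, 0.3, 0.3])
ideal = objectives(np.array([1/3, 1/3, 1/3])) * 0.5   # assumed reference point
print(tchebycheff(np.array([0.2, 0.5, 0.3]), weights, ideal))
```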