
    Energy Efficiency in Cache Enabled Small Cell Networks With Adaptive User Clustering

    Using a network of cache-enabled small cells, traffic during peak hours can be reduced considerably by proactively fetching the content that is most likely to be requested. In this paper, we explore the impact of proactive caching on an important metric for future-generation networks, namely energy efficiency (EE). We argue that exploiting the correlation in user content popularity profiles, in addition to the spatial repartition of users with comparable request patterns, can considerably improve the achievable energy efficiency of the network. The problem of optimizing EE is decoupled into two related subproblems. The first addresses content popularity modeling. While most existing works assume similar popularity profiles for all users in the network, we consider an alternative caching framework in which users are clustered according to their content popularity profiles. To showcase the utility of the proposed clustering scheme, we use a statistical model selection criterion, namely the Akaike information criterion (AIC). Using stochastic geometry, we derive a closed-form expression of the achievable EE and find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartition of users with comparable request patterns. Considering a snapshot of the network, we formulate a combinatorial optimization problem that optimizes content placement so as to minimize the transmission power used. Numerical results show that the clustering scheme considerably improves the cache hit probability and, consequently, the EE compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability.
    Comment: 30 pages, 5 figures, submitted to Transactions on Wireless Communications (15-Dec-2016)
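A brief illustration of the AIC-based model-selection step this abstract describes: fit mixture models of increasing order to user popularity profiles and keep the order with the lowest AIC. This is a minimal sketch on synthetic data, not the paper's model; the profile dimension, cluster range, and Dirichlet-generated requests are all assumptions.

```python
# Hypothetical sketch: choosing the number of user clusters by AIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Each row: one user's content popularity profile over 20 contents
# (empirical request frequencies) -- assumed data for the sketch.
profiles = rng.dirichlet(np.ones(20), size=500)
X = profiles[:, :-1]  # drop one column to avoid the simplex sum constraint

best_k, best_aic = None, np.inf
for k in range(1, 11):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    aic = gmm.aic(X)  # AIC penalizes log-likelihood by parameter count
    if aic < best_aic:
        best_k, best_aic = k, aic

print(f"AIC-selected number of user clusters: {best_k}")
```

The AIC comparison guards against over-segmenting users: extra clusters only survive if they improve the likelihood enough to pay their parameter penalty.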

    Resilience-oriented design and proactive preparedness of electrical distribution system

    Extreme weather events, such as hurricanes and ice storms, pose a top threat to power distribution systems as their frequency and severity increase over time. Recent severe power outages caused by extreme weather events, such as Hurricane Harvey and Hurricane Irma, have highlighted the importance and urgency of enhancing the resilience of electric power distribution systems. The goal of enhancing the resilience of distribution systems against extreme weather events can be fulfilled through upgrading and operating measures. This work focuses on investigating the impacts of upgrading measures and preventive operational measures on distribution system resilience. The objective of this dissertation is to develop a multi-timescale optimization framework that provides actionable resilience-enhancing strategies for utility companies to harden/upgrade power distribution systems in the long term and carry out proactive preparation in the short term. In the long-term resilience-oriented design (ROD) of distribution systems, the main challenges are i) modeling the spatio-temporal correlation among ROD decisions and uncertainties, ii) capturing the entire failure-recovery-cost process, and iii) solving the resultant large-scale mixed-integer stochastic problem efficiently. To deal with these challenges, we propose a hybrid stochastic process with a deterministic causal structure to model the spatio-temporal correlations of uncertainties. A new two-stage stochastic mixed-integer linear program (MILP) is formulated to capture the impacts of ROD decisions and uncertainties on system responses to extreme weather events. The objective is to minimize the ROD investment cost in the first stage and the expected costs of loss of load, DG operation, and damage repairs in the second stage. A dual decomposition (DD) algorithm with branch-and-bound is developed to solve the proposed model with binary variables in both stages. Case studies on the IEEE 123-bus test feeder have shown that the proposed approach can improve system resilience at minimum cost. For an upcoming extreme weather event, we develop a pre-event proactive energy management and preparation strategy such that flexible resources can be prepared in advance. To explicitly capture the trade-off between the pre-event resource allocation cost and the damage loss risk associated with an event, the strategy is modeled as a two-stage stochastic mixed-integer linear program (SMILP) with Conditional Value-at-Risk (CVaR). The progressive hedging algorithm is used to solve the proposed model and obtain the optimal proactive energy management and preparation strategy. Numerical studies on the modified IEEE 123-bus test feeder show the effectiveness of the proposed approach in improving system resilience at different risk levels.
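To make the CVaR trade-off concrete, here is a toy two-stage stochastic LP with the Rockafellar-Uryasev linearization of CVaR, written with PuLP. It is an illustrative sketch only: the single-resource loss model, the cost values, the scenario set, and every name are assumptions, not the dissertation's formulation.

```python
# Toy pre-event preparation model: first-stage resource x, scenario losses,
# and CVaR linearized via auxiliary variables (Rockafellar-Uryasev).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

scenarios = [0, 1, 2]
prob_s = [0.5, 0.3, 0.2]        # assumed scenario probabilities
demand = [10.0, 25.0, 40.0]     # assumed post-event demand per scenario
c_prep, c_loss, alpha = 1.0, 5.0, 0.9

m = LpProblem("pre_event_preparation", LpMinimize)
x = LpVariable("prepared_resource", lowBound=0)   # first-stage decision
shed = [LpVariable(f"shed_{s}", lowBound=0) for s in scenarios]
eta = LpVariable("VaR_eta")                       # VaR level (free variable)
z = [LpVariable(f"excess_{s}", lowBound=0) for s in scenarios]

for s in scenarios:
    m += shed[s] >= demand[s] - x          # unmet demand is shed
    m += z[s] >= c_loss * shed[s] - eta    # loss in excess of the VaR level

cvar = eta + lpSum(prob_s[s] * z[s] for s in scenarios) / (1 - alpha)
m += c_prep * x + cvar                     # objective: prep cost + CVaR of loss
m.solve()
print(f"prepare x = {x.value()}, VaR level eta = {eta.value()}")
```

Raising alpha makes the model hedge harder against the worst scenarios, which is exactly the risk-level dial the numerical studies above vary.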

    Towards Operator-less Data Centers Through Data-Driven, Predictive, Proactive Autonomics

    Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools; at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using live data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster, with the goal of building and evaluating predictive models for node failures. Our results support the practicality of a data-driven approach by showing the effectiveness of predictive models based on data found in typical data center logs. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing node state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict whether nodes will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. This level of performance allows us to recover a large fraction of jobs' executions (by redirecting them to other nodes when a failure of the present node is predicted) that would otherwise have been wasted due to failures. [...]
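A minimal sketch of the evaluation step reported above: average the failure probabilities of several Random Forests and pick the score threshold that caps the false positive rate at 5%. The data here is synthetic; the paper's features come from Google cluster logs via BigQuery, and the forest count and feature dimension below are assumptions.

```python
# Ensemble of Random Forests + threshold selection at a 5% FPR cap.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                    # stand-in node features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 2).astype(int)  # "fails in 24h"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train several forests with different seeds and average their probabilities,
# mirroring the ensemble-of-Random-Forests construction in the abstract.
forests = [RandomForestClassifier(n_estimators=100, random_state=s).fit(X_tr, y_tr)
           for s in range(5)]
scores = np.mean([f.predict_proba(X_te)[:, 1] for f in forests], axis=0)

fpr, tpr, thr = roc_curve(y_te, scores)
i = np.searchsorted(fpr, 0.05)   # first ROC point with FPR >= 5%
print(f"threshold={thr[i]:.3f}  FPR={fpr[i]:.3f}  TPR={tpr[i]:.3f}")
```

Fixing the operating point by FPR rather than accuracy matches the use case: false alarms trigger needless job migrations, so their rate is the budgeted quantity.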

    How will the Internet of Things enable Augmented Personalized Health?

    The Internet of Things (IoT) is profoundly redefining the way we create, consume, and share information. Health aficionados and citizens are increasingly using IoT technologies to track their sleep, food intake, activity, vital body signals, and other physiological observations. This is complemented by IoT systems that continuously collect health-related data from the environment and inside the living quarters. Together, these have created an opportunity for a new generation of healthcare solutions. However, interpreting data to understand an individual's health is challenging. It is usually necessary to look at that individual's clinical record and behavioral information, as well as social and environmental information affecting that individual. Interpreting how well a patient is doing also requires looking at their adherence to respective health objectives, the application of relevant clinical knowledge, and the desired outcomes. We resort to the vision of Augmented Personalized Healthcare (APH), which exploits the extensive variety of relevant data and medical knowledge using Artificial Intelligence (AI) techniques to extend and enhance human health, and present various stages of augmented health management strategies: self-monitoring, self-appraisal, self-management, intervention, and disease progress tracking and prediction. kHealth technology, a specific incarnation of APH, and its application to asthma and other diseases are used to provide illustrations and discuss alternatives for technology-assisted health management. Several prominent efforts involving IoT and patient-generated health data (PGHD) with respect to converting multimodal data into actionable information (big data to smart data) are also identified. The roles of three components in an evidence-based semantic perception approach, namely contextualization, abstraction, and personalization, are discussed.

    Proactive and politically skilled professionals: What is the relationship with affective occupational commitment?

    The aim of this study is to extend research on employee affective commitment in three ways: (1) instead of organizational commitment, the focus is on occupational commitment; (2) the role of proactive personality in affective occupational commitment is examined; and (3) occupational satisfaction is examined as a mediator, and political skills as a moderator, in the relationship between proactive personality and affective occupational commitment. Two connected studies, one in a hospital in the private sector and one in a university in the public sector, were carried out in Pakistan, drawing on a total sample of over 400 employees. The results show that proactive personality is positively related to affective occupational commitment, and that occupational satisfaction partly mediates the relationship between proactive personality and affective occupational commitment. No moderating effect of political skills is found in the relationship between proactive personality and affective occupational commitment. Political skills do, however, moderate the relationship between proactive personality and affective organizational commitment.
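For readers unfamiliar with how such a moderation hypothesis is tested, here is a hypothetical sketch: an OLS model with a proactive-personality x political-skill interaction term, whose coefficient carries the moderation test. The variable names and simulated data are illustrative only, not the study's instrument or results.

```python
# Moderation test via an interaction term in OLS (statsmodels).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "proactive": rng.normal(size=n),   # proactive personality score
    "polskill": rng.normal(size=n),    # political skill score
})
# Simulated outcome: a main effect of proactivity and no interaction,
# mirroring the null moderation finding for occupational commitment.
df["commitment"] = 0.4 * df["proactive"] + 0.1 * df["polskill"] + rng.normal(size=n)

model = smf.ols("commitment ~ proactive * polskill", data=df).fit()
print(model.summary().tables[1])  # the proactive:polskill row tests moderation
```

A non-significant interaction coefficient is what "no moderating effect" means operationally; the mediation claim would be tested separately, e.g. with an indirect-effect analysis.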

    2D Proactive Uplink Resource Allocation Algorithm for Event Based MTC Applications

    We propose a two-dimensional (2D) proactive uplink resource allocation (2D-PURA) algorithm that aims to reduce the delay/latency in event-based machine-type communications (MTC) applications. Specifically, when an event of interest occurs at a device, it tends to spread to the neighboring devices. Consequently, when a device has data to send to the base station (BS), its neighbors are highly likely to transmit soon after. We therefore propose to cluster devices in the neighborhood around the event, also referred to as the disturbance region, into rings based on the distance from the original event. To reduce the uplink latency, we then proactively allocate resources for these rings. To evaluate the proposed algorithm, we analytically derive the mean uplink delay, the proportion of resource conservation due to successful allocations, and the proportion of uplink resource wastage due to unsuccessful allocations for the 2D-PURA algorithm. Numerical results demonstrate that the proposed method can reduce the mean uplink delay by over 16.5 and 27 percent compared with the 1D algorithm and the standard method, respectively.
    Comment: 6 pages, 6 figures, published in the 2018 IEEE Wireless Communications and Networking Conference (WCNC)
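An illustrative sketch of the ring-clustering step: devices in the disturbance region are grouped into concentric rings by distance from the event epicenter, and uplink resources would then be pre-granted ring by ring. The ring width, ring count, and coordinates are assumptions, not parameters from the paper.

```python
# Group MTC devices into distance rings around an event epicenter.
import numpy as np

def assign_rings(device_xy, event_xy, ring_width=50.0, num_rings=4):
    """Return each device's ring index (0 = innermost), or -1 if outside."""
    d = np.linalg.norm(device_xy - event_xy, axis=1)
    rings = (d // ring_width).astype(int)
    rings[rings >= num_rings] = -1       # beyond the disturbance region
    return rings

rng = np.random.default_rng(0)
devices = rng.uniform(0, 500, size=(30, 2))   # device positions in meters
event = np.array([250.0, 250.0])              # where the event occurred
rings = assign_rings(devices, event)
for r in range(4):
    print(f"ring {r}: {np.sum(rings == r)} devices -> pre-allocate uplink grants")
```

Allocating per ring rather than per device is the point of the 2D scheme: inner rings are served first because the disturbance, and hence the traffic, spreads outward.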

    Institutional Profiles and Entrepreneurship Orientation: A Case of Turkish Graduate Students

    In this study I aimed to describe and explain which factors affect entrepreneurship orientation. I expect that proactivity and entrepreneurial behavior are directly related, and that a country's institutional factors play a very important role in entrepreneurship orientation. Therefore, I defined institutional profiles and the characteristics of entrepreneurship. Institutional profiles are classified into three main dimensions, based on Kostova's (1997) research: cognitive, regulatory, and normative. Following Kostova, I modified the questionnaire to adapt it to Turkish culture, and I articulated and measured these dimensions. To operationally define the perceived institutional profiles for entrepreneurship, I also generated a large pool of items. The institutional profile dimensions include thirty-four items: eleven for the regulatory dimension, eight for the cognitive dimension, and fifteen for the normative dimension. My focus in this study is on the measurement and correlates of proactive behavior as a personal disposition, that is, a stable behavioral tendency. A proactive person searches for opportunities, takes initiative, and acts; the proactive dimension of behavior is essentially rooted in people's need to manipulate and control the environment. At the same time, the prospective entrepreneur's interpretation of the environment is also moderated by his/her beliefs about the environment. From this point of view, I selected a research sample of 170 graduate students, accepting them as potential investors. For the proactive personality measurement, which includes seventeen items, Bateman and Crant's (1993) questionnaire was used. Furthermore, data reliability was tested before the analysis began. To conclude, the findings of the research are discussed.
    Keywords: entrepreneurship orientation, institutional profiles, culture
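The abstract reports a reliability check on the questionnaire data but does not name the statistic; a common choice for multi-item scales is Cronbach's alpha, sketched here on illustrative data sized like the study (170 respondents, 17 proactivity items). The data and the choice of alpha are assumptions.

```python
# Cronbach's alpha for a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert-type scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(170, 1))                       # 170 respondents
items = latent + rng.normal(scale=0.8, size=(170, 17))   # 17 scale items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency for a scale of this kind.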

    Towards Data-Driven Autonomics in Data Centers

    Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools; at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster, with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict whether machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for the BigQuery and classification analyses are publicly available from the authors' website.
    Comment: 12 pages, 6 figures
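A sketch of the autonomic loop this abstract envisions: periodically score each machine's 24-hour failure risk with the trained ensemble and drain machines above a chosen threshold. The function names `fetch_features` and `drain`, the threshold value, and the demo data are hypothetical placeholders, not APIs or results from the paper.

```python
# Hypothetical autonomic step: score machines, drain those predicted to fail.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def autonomic_step(forests, machines, fetch_features, drain, threshold=0.62):
    feats = np.array([fetch_features(m) for m in machines])
    # Average failure probability across the Random Forest ensemble.
    risk = np.mean([f.predict_proba(feats)[:, 1] for f in forests], axis=0)
    for m, r in zip(machines, risk):
        if r >= threshold:
            drain(m)   # redirect this machine's jobs before the predicted failure
    return dict(zip(machines, risk))

# Tiny demo with stand-in training data and callbacks (illustrative only).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)
forests = [RandomForestClassifier(n_estimators=20, random_state=s).fit(X, y)
           for s in range(3)]
risks = autonomic_step(forests, ["node-a", "node-b"],
                       fetch_features=lambda m: rng.normal(size=4),
                       drain=lambda m: print(f"draining {m}"))
print(risks)
```

Running this step on live feature streams, rather than replaying logs, is exactly the on-line operation the paper says it discusses.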