3,266 research outputs found

    UAVs for Enhanced Communication and Computation


    Federated Learning with a Drone Orchestrator: Path Planning for Minimized Staleness


    A policy-based architecture for virtual network embedding

    Network virtualization is a technology that enables multiple virtual instances to coexist on a common physical network infrastructure. This paradigm has fostered new business models, allowing infrastructure providers to lease or share their physical resources. Each virtual network is isolated and can be customized to support a new class of customers and applications. To this end, infrastructure providers need to embed virtual networks on their infrastructure. Virtual network embedding is the (NP-hard) problem of matching constrained virtual networks onto a physical network. Heuristics to solve the embedding problem have exploited several policies under different settings. For example, centralized solutions have been devised for small enterprise physical networks, while distributed solutions have been proposed over larger federated wide-area networks. In this thesis we present a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the three (invariant) embedding mechanisms: physical resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, and provides bounds on embedding efficiency and embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel and existing policy configurations is compared via extensive simulations and over a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed that enables users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities.
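    To make the mapping mechanism concrete, here is a minimal sketch of one possible node-mapping policy: a greedy heuristic that places the most demanding virtual nodes on the physical nodes with the most residual CPU capacity. It is illustrative only; the function name and data are hypothetical, and the thesis's actual discovery, mapping, and allocation policies are not reproduced here.

def greedy_node_mapping(virtual_demands, physical_capacity):
    """Map each virtual node onto the physical node with the most
    residual CPU capacity that can still satisfy its demand."""
    residual = dict(physical_capacity)
    mapping = {}
    # Place the most demanding virtual nodes first.
    for vnode, demand in sorted(virtual_demands.items(), key=lambda kv: -kv[1]):
        # Physical nodes with enough residual capacity to host vnode.
        candidates = [p for p, cap in residual.items() if cap >= demand]
        if not candidates:
            return None  # embedding rejected: no feasible host
        host = max(candidates, key=residual.get)
        mapping[vnode] = host
        residual[host] -= demand  # allocate the capacity
    return mapping

# Example: embed two virtual nodes onto a three-node substrate.
print(greedy_node_mapping({"v1": 30, "v2": 20}, {"p1": 50, "p2": 40, "p3": 10}))
# -> {'v1': 'p1', 'v2': 'p2'}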

    Flight deck automation: Promises and realities

    Issues of flight deck automation are multifaceted and complex. The rapid introduction of advanced computer-based technology onto the flight deck of transport category aircraft has had considerable impact both on aircraft operations and on the flight crew. As part of NASA's responsibility to facilitate an active exchange of ideas and information among members of the aviation community, a NASA/FAA/Industry workshop devoted to flight deck automation was organized by the Aerospace Human Factors Research Division of NASA Ames Research Center. Participants were invited from industry and from government organizations responsible for the design, certification, operation, and accident investigation of transport category, automated aircraft. The goal of the workshop was to clarify the implications of automation, both positive and negative. Workshop panels and working groups identified issues regarding the design, training, and procedural aspects of flight deck automation, as well as the crew's ability to interact and perform effectively with the new technology. The proceedings include the invited papers and the panel and working group reports, as well as the summary and conclusions of the conference.

    Managing a Profitable Interactive Email Marketing Program: Modeling and Analysis

    Despite the popularity of mobile and social media, email continues to be the marketing tool that brings the highest ROI, according to the Direct Marketing Association's "Power of Direct" (2011) study. An important reason for email marketing's success is the application of an idea, "Permission Marketing," which asks marketers to seek consent from customers before sending them messages. Permission-based email marketing seeks to build a two-way interactive communication channel through which customers can engage with firms by expressing their interests, responding to firms' email messages, and making purchases. This thesis consists of two essays that address several key questions related to the management of a profitable interactive permission-based email marketing program. Existing research has examined the drivers of customers' opt-in and opt-out decisions, but it has investigated neither the timing of the two decisions nor the influence of transactional activity on the length of time a customer stays with an email program. In the first essay, we adopt a multivariate copula model using a pair-copula construction method to jointly model opt-in time (from a customer's first purchase to opt-in), opt-out time (from customer opt-in to opt-out), and average transaction amount. Through such multivariate dependences, this model significantly improves the predictive performance for opt-out time in comparison with several benchmark models. The study offers several important findings: (1) marketing intensity affects opt-in and opt-out times; (2) customers with certain characteristics are more or less likely to opt in or opt out; and (3) firms can extend customer opt-out time and increase customer spending levels by strategically allocating resources. Firms are using email marketing to engage with customers and encourage active transactional behavior. Extant research either focuses only on how customers respond to email messages or looks at the "average" effect of email on transactional behavior. In the second essay, we consider not only customers' responses to emails and their correlated transactional behavior, but also the dynamics that govern the evolution of the two types of customer relationship: the email-response and purchase relationships. We model the email open count with a binomial distribution and the purchase count with a zero-inflated negative binomial model, and we capture the dependence between the two discrete distributions using a copula approach. In addition, we develop a hidden Markov model to capture the effects of email contacts on purchase behavior, allowing the relationship that represents customers' responsiveness to email marketing to evolve flexibly along with the purchase relationship. We apply the proposed model in a non-contractual context where a retailer operates a large-scale email marketing program. Through the empirical study, we capture a positive dependence between the opening of emails and purchase behavior. We identify three purchase-behavior states along with three email-response states. The empirical findings suggest that customers in the medium relationship state have the highest intrinsic propensity to open an email, followed by customers in the lowest and highest relationship states. Furthermore, we derive a dynamic email marketing resource allocation policy using the hidden Markov model and the purchase and email-open model estimates. We demonstrate that a forward-looking agent could maximize the long-term profits from its existing email subscribers.
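    As a rough illustration of the copula idea in the second essay, the sketch below couples a binomial email-open count with a zero-inflated negative binomial purchase count through a Gaussian copula, so that simulated opens and purchases are positively dependent. All parameter values, and the choice of a Gaussian copula, are assumptions for illustration; the essay's estimated model is not reproduced here.

import numpy as np
from scipy.stats import norm, binom, nbinom

rng = np.random.default_rng(0)
rho = 0.4                     # copula correlation (assumed)
n_emails, p_open = 10, 0.3    # binomial marginal for email opens
pi0, r, p = 0.6, 2.0, 0.5     # ZINB: structural zeros plus NB(r, p)

def zinb_ppf(u, pi0, r, p):
    """Inverse CDF of a zero-inflated negative binomial: a structural
    zero with probability pi0, otherwise a draw from NB(r, p)."""
    q = np.clip((np.asarray(u) - pi0) / (1.0 - pi0), 0.0, 1.0 - 1e-12)
    x = nbinom.ppf(q, r, p)
    return np.where(x < 0, 0.0, x)  # discrete ppf(0) can return -1

# Draw correlated uniforms from the Gaussian copula, then map each
# margin through its own inverse CDF.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u_open, u_buy = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
opens = binom.ppf(u_open, n_emails, p_open)
buys = zinb_ppf(u_buy, pi0, r, p)
print("empirical corr(opens, purchases):", np.corrcoef(opens, buys)[0, 1])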

    Multipath Routing in Cloud Computing using Fuzzy based Multi-Objective Optimization System in Autonomous Networks

    Intelligent houses and buildings, autonomous automobiles, drones, robots, and other items that are successfully incorporated into daily life are examples of autonomous systems and the Internet of Things (IoT) that have advanced as research areas. Secure data transfer in untrusted cloud applications has recently been one of the most significant requirements in the cloud. In order to safeguard user data from unauthorised users, encrypted data is stored on cloud servers. Existing techniques offer either security or efficiency for data transformation; they fail to retain complete security while undergoing significant changes. This research proposes a novel technique for multipath-routing-based energy optimization of autonomous networks. The main goal of this research is to enhance secure data transmission in cloud computing together with network energy optimization. Secure data transmission is carried out using multi-authentication attribute-based encryption with a multipath routing protocol. Network energy is then optimized using multi-objective fuzzy-based reinforcement learning. The experimental analysis has been carried out for both secure data transmission and energy optimization of the network. The parameters analysed are scalability of 79%, QoS of 75%, encryption time of 42%, latency of 96%, energy efficiency of 98%, and end-to-end delay of 45%.
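    As a hedged sketch of the fuzzy multi-objective idea, the snippet below scores candidate paths by fuzzy memberships for low delay and high residual energy, aggregates them with a min t-norm, and selects the best-scoring path. The membership shapes, thresholds, and path data are hypothetical, not the paper's actual controller.

def tri_down(x, lo, hi):
    """Membership that is 1 at or below lo and falls to 0 at hi."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def tri_up(x, lo, hi):
    """Membership that is 0 at or below lo and rises to 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def path_score(delay_ms, energy_pct):
    # Aggregate the two objectives with a min t-norm, as in
    # Mamdani-style fuzzy inference.
    return min(tri_down(delay_ms, 20, 120), tri_up(energy_pct, 30, 90))

candidate_paths = {  # hypothetical multipath candidate set
    "A-B-D": {"delay_ms": 40, "energy_pct": 80},
    "A-C-D": {"delay_ms": 25, "energy_pct": 55},
    "A-B-C-D": {"delay_ms": 90, "energy_pct": 95},
}
best = max(candidate_paths, key=lambda p: path_score(**candidate_paths[p]))
print("selected path:", best)  # -> A-B-D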

    Bayesian Network Approach to Assessing System Reliability for Improving System Design and Optimizing System Maintenance

    A quantitative analysis of a system that has a complex reliability structure always involves considerable challenges. This dissertation mainly addresses uncertainty inherent in complicated reliability structures that may cause unexpected and undesired results. Reliability structure uncertainty cannot be handled by traditional reliability analysis tools such as the Fault Tree and the Reliability Block Diagram because of their deterministic Boolean logic. Therefore, I employ the Bayesian network, which provides a flexible modeling method for building a multivariate distribution. By representing a system reliability structure as a joint distribution, the uncertainty and correlations existing between a system's elements can effectively be modeled in a probabilistic manner. This dissertation focuses on analyzing system reliability across the entire system life cycle, particularly the production stage and early design stages. In the production stage, the research investigates a system that is continuously monitored by on-board sensors. By modeling the complex reliability structure with a Bayesian network integrated with various stochastic processes, I propose several methodologies that evaluate system reliability on a real-time basis and optimize maintenance schedules. In the early design stages, the research aims to predict system reliability based on the current system design and to improve the design if necessary. The three main challenges in this research are: 1) the lack of field failure data, 2) the complex reliability structure, and 3) how to effectively improve the design. To tackle these difficulties, I present several modeling approaches using Bayesian inference and the nonparametric Bayesian network, where the system is explicitly analyzed through sensitivity analysis. This modeling approach is further enhanced by incorporating a temporal dimension. However, the nonparametric Bayesian network approach generally comes with high computational cost, especially when a complex and large system is modeled. To alleviate this computational burden, I also suggest building a surrogate model with quantile regression. In summary, this dissertation studies and explores the use of the Bayesian network in analyzing complex systems. All proposed methodologies are demonstrated by case studies.
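    A small worked example of why a Bayesian network helps here: a common-cause node C makes component failures correlated, which deterministic Boolean tools miss. The sketch below computes series-system reliability by enumerating the joint distribution and compares it with the independence approximation a fault tree would give. All probabilities are illustrative assumptions.

from itertools import product

p_c = 0.05                              # P(common cause present)
p_fail_a = {False: 0.02, True: 0.30}    # P(A fails | C)
p_fail_b = {False: 0.03, True: 0.25}    # P(B fails | C)

def p_system_works():
    """P(series system works), summing over joint states of C, A, B."""
    total = 0.0
    for c, a_fail, b_fail in product([False, True], repeat=3):
        pr = p_c if c else 1 - p_c
        pr *= p_fail_a[c] if a_fail else 1 - p_fail_a[c]
        pr *= p_fail_b[c] if b_fail else 1 - p_fail_b[c]
        if not a_fail and not b_fail:   # series system: both must work
            total += pr
    return total

print(f"Bayesian network estimate: {p_system_works():.4f}")
# The independence assumption a fault tree would make:
pa = 0.95 * 0.02 + 0.05 * 0.30          # marginal failure prob. of A
pb = 0.95 * 0.03 + 0.05 * 0.25          # marginal failure prob. of B
print(f"independence approximation: {(1 - pa) * (1 - pb):.4f}")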

    Two-stage network design in humanitarian logistics.

    Natural disasters such as floods and earthquakes can cause multiple deaths, injuries, and severe damage to property. In order to minimize the impact of such disasters, emergency response plans should be developed well in advance of such events. Moreover, because different organizations such as non-governmental organizations (NGOs), governments, and militaries are involved in emergency response, the development of a coordination scheme is necessary to efficiently organize all the activities and minimize the impact of disasters. The logistics network design component of emergency management includes determining where to store emergency relief materials, the corresponding quantities, and their distribution to the affected areas in a cost-effective and timely manner. In a two-echelon humanitarian relief chain, relief materials are first pre-positioned in regional rescue centers (RRCs) or supply sources, or they are donated to centers. These materials are then shipped to local rescue centers (LRCs) that distribute them locally. Finally, the relief materials are delivered to demand points (also called affected areas, or AAs). Before the occurrence of a disaster, exact data pertaining to the origin of demand, the amount of demand at these points, the availability of routes, the availability of LRCs, the percentage of usable pre-positioned material, and other factors are not available. Hence, in order to build a location-allocation model for pre-positioning relief material, we can estimate data based on prior events and consequently develop a stochastic model. The outputs of this model are the location and amount of pre-positioned material at each RRC as well as the distribution of relief materials through LRCs to demand points. Once the disaster occurs, actual values of the parameters we seek (e.g., demand) become available. Also, other supply sources such as donation centers and vendors can be taken into account. Hence, using the updated data, a new location-allocation plan should be developed and used. It should be mentioned that in the aftermath of the disaster, new parameters such as the reliability of routes, the ransack probability of routes, and the priority of individual demand points become accessible. Therefore, the related model has multiple objectives. In this dissertation, we first develop a comprehensive pre-positioning model that minimizes the total cost while considering a time limit for deliveries. The model incorporates shortage, transportation, and holding costs. It also considers limited capacities for each RRC and LRC. Moreover, it allows direct shipments (i.e., shipments can be made from RRCs directly to AAs) and incorporates service quality. Because this model is in the class of two-stage stochastic facility location problems, it is NP-hard and should be solved heuristically. To solve it, we propose a Lagrangian heuristic based on Lagrangian relaxation. The results of the first model are the amounts and locations of pre-positioned relief materials as well as their allocation plan for each possible scenario. This information is then used as part of the input to the second model, where the facility location problem is formulated using real data. In fact, with pre-positioned items in hand, other supply sources can be considered as necessary. The resulting multi-objective problem is formulated based on a widely used method called lexicographic goal programming. The real-time facility location model of this dissertation is multi-product. It also considers the location problem for LRCs using real-time data. Moreover, it includes the minimization of the total cost as one of the objectives and allows direct shipments. This model is also NP-hard and is solved using the Lagrangian heuristic. One of the contributions of this dissertation is the development of a Lagrangian heuristic method for solving the pre-positioning and real-time models. Based on the results of the Lagrangian heuristic for the pre-positioning model, almost all the deviations from the optimal values are below 5%, which shows that the heuristic performs acceptably for the problem. Also, the execution times are no more than 780 seconds for the largest test instances. Moreover, for the real-time model, though not directly comparable, the solutions are fairly close to optimal and the execution time for the largest test instance is below 660 seconds. Hence, the efficiency of the heuristic for the real-time model is satisfactory.
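    To sketch the skeleton of a Lagrangian heuristic of this kind: relax a complicating constraint into the objective with a multiplier and update the multiplier by subgradient ascent, which yields lower bounds on the optimum of a minimization problem. The tiny coverage-style instance and step rule below are assumptions for illustration, not the dissertation's actual pre-positioning model.

import numpy as np

cost = np.array([4.0, 3.0, 6.0, 5.0])      # cost of opening each RRC
supply = np.array([3.0, 2.0, 5.0, 4.0])    # demand each RRC can cover
required = 7.0                              # relaxed coverage constraint

lam = 0.0                                   # Lagrange multiplier >= 0
best_bound = -np.inf
for k in range(1, 51):
    # Relaxed subproblem: open RRC i iff its reduced cost is negative.
    reduced = cost - lam * supply
    x = (reduced < 0.0).astype(float)
    # The dual value L(lam) is a lower bound on the true optimum.
    bound = reduced @ x + lam * required
    best_bound = max(best_bound, bound)
    # Subgradient of L at lam, with a diminishing step size.
    g = required - supply @ x
    lam = max(0.0, lam + (1.0 / k) * g)

print(f"best Lagrangian lower bound: {best_bound:.3f}, multiplier: {lam:.3f}")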

    Status report on the NCRIS eResearch capability summary

    Preface The period 2006 to 2014 has seen an unprecedented approach by the Australian Government to the national support of eResearch infrastructure. Not only has investment been at a significantly greater scale than previously, but the intent and approach have been highly innovative, shaped by a strategic approach to research support in which the critical element, the catchword, has been collaboration. The innovative directions shaped by this strategy, under the banner of the Australian Government's National Collaborative Research Infrastructure Strategy (NCRIS), have led to significant and creative initiatives and activity, seminal to new research and fields of discovery. Origin This document is a Technical Report on the Status of the NCRIS eResearch Capability. It was commissioned by the Australian Government Department of Education and Training in the second half of 2014 to examine a range of questions and issues concerning the development of this infrastructure over the period 2006-2014. The infrastructure has been built and implemented over this period following investments made by the Australian Government amounting to over $430 million, under a number of funding initiatives.

    Support for flexible and transparent distributed computing

    Modern distributed computing developed from the traditional supercomputing community, rooted firmly in the culture of batch management. The field has therefore been dominated by queuing-based resource managers and workflow-based job submission environments, where static resource demands needed to be determined and reserved prior to launching executions. This has made it difficult to support resource environments (e.g., Grid, Cloud) where both the available resources and the resource requirements of applications may be dynamic and unpredictable. This thesis introduces a flexible execution model where the compute capacity can be adapted to fit the needs of applications as they change during execution. Resource provision in this model is based on a fine-grained, self-service approach instead of the traditional one-time, system-level model. The thesis introduces a middleware-based Application Agent (AA) that provides a platform for applications to dynamically interact and negotiate resources with the underlying resource infrastructure. We also consider the issue of transparency, i.e., hiding the provision and management of the distributed environment, which is key to attracting the public to use the technology. The AA not only replaces the user-controlled process of preparing and executing an application with a transparent software-controlled process, it also hides the complexity of selecting the right resources to ensure execution QoS. This service is provided by an On-line Feedback-based Automatic Resource Configuration (OAC) mechanism cooperating with the flexible execution model. The AA constantly monitors utility-based feedback from the application during execution and is thus able to learn its behaviour and resource characteristics. This allows it to automatically compose the most efficient execution environment on the fly and satisfy any execution requirements defined by users. Two policies are introduced to supervise the information learning and resource tuning in the OAC. The Utility Classification policy classifies hosts according to their historical performance contributions to the application. According to this classification, the AA chooses high-utility hosts and withdraws low-utility hosts to configure an optimum environment. The Desired Processing Power Estimation (DPPE) policy dynamically configures the execution environment according to the estimated total processing power needed to satisfy users' execution requirements. Through the introduction of flexibility and transparency, a user is able to run a dynamic or normal distributed application anywhere with optimised execution performance, without managing distributed resources. Building on the standalone model, the thesis further introduces a federated resource negotiation framework as a step towards an autonomous multi-user distributed computing world.
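    A minimal sketch of the Utility Classification idea, under assumed details: each host's utility is an exponentially smoothed history of its observed performance contributions, and the environment keeps high-utility hosts while withdrawing low-utility ones. The smoothing rule, threshold, and feedback values are hypothetical, not the thesis's actual policy parameters.

from collections import defaultdict

ALPHA = 0.3        # weight of the newest feedback sample (assumed)
THRESHOLD = 0.5    # minimum utility for a host to stay (assumed)

utility = defaultdict(lambda: 0.5)   # neutral prior for unseen hosts

def record_feedback(host, contribution):
    """Exponentially smooth each host's observed contribution."""
    utility[host] = (1 - ALPHA) * utility[host] + ALPHA * contribution

def configure_environment(hosts):
    """Keep high-utility hosts; withdraw the rest."""
    keep = [h for h in hosts if utility[h] >= THRESHOLD]
    drop = [h for h in hosts if utility[h] < THRESHOLD]
    return keep, drop

# A few rounds of utility-based feedback per host.
for host, obs in [("h1", 0.9), ("h2", 0.2), ("h1", 0.8), ("h3", 0.6), ("h2", 0.1)]:
    record_feedback(host, obs)

keep, drop = configure_environment(["h1", "h2", "h3"])
print("keep:", keep, "withdraw:", drop)   # keep h1, h3; withdraw h2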