34 research outputs found

    Minimizing weighted mean absolute deviation of job completion times from their weighted mean

    We address a single-machine scheduling problem where the objective is to minimize the weighted mean absolute deviation of job completion times from their weighted mean. This problem and its precursors aim to achieve the maximum admissible level of service equity. It has been shown earlier that the unweighted version of this problem is NP-hard in the ordinary sense. For that version, a pseudo-polynomial time dynamic program and a 2-approximate algorithm are available. However, not much (except for an important solution property) exists for the weighted version. In this paper, we establish the relationship between the optimal solution to the weighted problem and a related one in which the deviations are measured from the weighted median (rather than the mean) of the job completion times; this generalizes the 2-approximation result mentioned above. We proceed to give a pseudo-polynomial time dynamic program, establishing the ordinary NP-hardness of the problem in general. We then present a fully polynomial time approximation scheme as well. Finally, we report the findings from a limited computational study on the heuristic solution of the general problem. Our results specialize easily to the unweighted case; they also lead to an approximation of the set of schedules that are efficient with respect to both the weighted mean absolute deviation and the weighted mean completion time. © 2011 Elsevier Inc. All rights reserved.
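As a concrete illustration of the objective above, the following minimal sketch evaluates the weighted mean absolute deviation of completion times from their weighted mean for a given job sequence. The function and variable names are ours, not the paper's, and this is only the objective evaluation, not the dynamic program or the approximation scheme.

```python
def weighted_mad(processing_times, weights):
    """Weighted mean absolute deviation of job completion times from
    their weighted mean, for jobs processed in the given order."""
    total_w = sum(weights)
    # Completion times under the given sequence.
    completions, t = [], 0
    for p in processing_times:
        t += p
        completions.append(t)
    # Weighted mean completion time.
    mean_c = sum(w * c for w, c in zip(weights, completions)) / total_w
    # Weighted mean absolute deviation from that mean.
    return sum(w * abs(c - mean_c) for w, c in zip(weights, completions)) / total_w
```

For two unit-weight jobs with processing times 2 and 3, the completion times are 2 and 5, the weighted mean is 3.5, and the deviation is 1.5.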

    FPTAS for half-products minimization with scheduling applications

    A special class of quadratic pseudo-boolean functions called “half-products” (HP) has recently been introduced. It has been shown that HP minimization, while NP-hard, admits a fully polynomial time approximation scheme (FPTAS). In this note, we provide a more efficient FPTAS. We further show how an FPTAS can also be derived for the general case where the HP function is augmented by a problem-dependent constant and can justifiably be assumed to be nonnegative. This leads to an FPTAS for certain partitioning type problems, including many from the field of scheduling. © 2008 Elsevier B.V. All rights reserved.
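To make the object of study concrete, a half-product over binary variables has the form f(x) = Σ_{i<j} a_i b_j x_i x_j − Σ_i c_i x_i. The sketch below evaluates such a function and finds its exact minimum by exhaustive search over binary vectors; this exponential baseline is what the FPTAS approximates efficiently (the FPTAS itself is not reproduced here, and the names are ours).

```python
from itertools import product

def half_product(x, a, b, c):
    """Half-product f(x) = sum_{i<j} a_i*b_j*x_i*x_j - sum_i c_i*x_i."""
    n = len(x)
    quad = sum(a[i] * b[j] * x[i] * x[j]
               for i in range(n) for j in range(i + 1, n))
    return quad - sum(c[i] * x[i] for i in range(n))

def brute_force_min(a, b, c):
    """Exact minimum over all binary vectors (exponential; tiny n only)."""
    return min(half_product(x, a, b, c) for x in product((0, 1), repeat=len(a)))
```

For a = b = (1, 1) and c = (2, 2), setting both variables to 1 gives 1 − 4 = −3, the minimum.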

    Assignment Algorithms for Multi-Robot Task Allocation in Uncertain and Dynamic Environments

    Multi-robot task allocation is a general approach to coordinate a team of robots to complete a set of tasks collectively. The classical works adopt relevant theories from other disciplines (e.g., operations research, economics), but these are often not rich enough to deal with properties of the robotics domain, such as local perception and limited communication. This dissertation reports efforts on relaxing the assumptions, making problems simpler, and developing new methods that consider the constraints or uncertainties in robot problems. We aim to solve variants of classical multi-robot task allocation problems where the team of robots operates in dynamic and uncertain environments. In some of these problems, it is adequate to have a precise model of nondeterministic costs (e.g., time, distance) subject to change at run-time. In other problems, probabilistic or stochastic approaches are adequate to incorporate uncertainties into the problem formulation. For these settings, we propose algorithms that model dynamics owing to robot interactions, new cost representations incorporating uncertainty, algorithms specialized for those representations, and policies for tasks arriving in an online manner. First, we consider multi-robot task assignment problems where costs for performing tasks are interrelated, and the overall team objective need not be a standard sum-of-costs (or utilities) model, which enables straightforward treatment of the additional costs incurred by resource contention. In the model we introduce, a team may choose one of a set of shared resources to perform a task (e.g., several routes to reach a destination), and resource contention is modeled when multiple robots use the same resource. We propose efficient task assignment algorithms that model this contention with different forms of domain knowledge and compute an optimal assignment under such a model.
Second, we address the problem of finding the optimal assignment of tasks to a team of robots when the associated costs may vary, which arises when robots deal with uncertain situations. We propose a region-based cost representation incorporating cost uncertainty and modeling interrelationships among costs. We detail how to compute a sensitivity analysis that characterizes how much costs may change before optimality is violated. Using this analysis, robots are able to avoid unnecessary re-assignment computations and reduce global communication when costs change. Third, we consider multi-robot teams operating in probabilistic domains. We represent costs by distributions capturing the uncertainty in the environment. This representation also incorporates inter-robot couplings in planning the team’s coordination. We do not assume that costs are independent, an assumption frequently made in probabilistic models. We propose algorithms that help in understanding the effects of different characterizations of cost distributions, such as the mean and the Conditional Value-at-Risk (CVaR), where the latter assesses the risk of the outcomes from distributions. Last, we study multi-robot task allocation in a setting where tasks are revealed sequentially and where it is possible to execute bundles of tasks. In particular, we are interested in tasks that have synergies, so that the greater the number of tasks executed together, the larger the potential performance gain. We provide an analysis of bundling, giving an understanding of the important bundle size parameter. Based on this analysis, we propose several simple bundling policies that determine how many tasks the robots bundle for batched planning and execution.
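The standard sum-of-costs assignment model that this work generalizes can be sketched as follows: given a robot-by-task cost matrix, pick a one-to-one assignment minimizing total cost. The exhaustive search below is a toy baseline for small teams (the dissertation's own algorithms, which model contention and uncertainty, are not reproduced); the Hungarian algorithm solves the same problem in polynomial time.

```python
from itertools import permutations

def optimal_assignment(cost):
    """Minimum-cost one-to-one assignment of robots (rows) to tasks
    (columns), found by exhaustive search over permutations."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[r][perm[r]] for r in range(n)))
    return list(best), sum(cost[r][best[r]] for r in range(n))
```

For the cost matrix [[4, 1], [2, 3]], assigning robot 0 to task 1 and robot 1 to task 0 yields the minimum total cost 3.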

    Development of Application Specific Clustering Protocols for Wireless Sensor Networks

    Applications of wireless sensor networks (WSNs) span various areas, from weather forecasting to measuring soil parameters in agriculture, and from the battlefield to health monitoring. The constrained battery power of sensor nodes makes network design a challenging task. Among several research areas in WSNs, designing energy-efficient protocols is a prominent one. Clustering is a proven solution to enhance network lifetime by utilizing the available battery power efficiently. In this thesis, a thorough review has been carried out to study the strengths and weaknesses of existing clustering algorithms that inspired the design of distributed and energy-efficient clustering in WSNs. The Distributed Dynamic Clustering Protocol (DDCP) has been proposed to allow all the nodes to take part in the cluster formation scheme and the data transmission process. This protocol consists of a cluster-head selection algorithm, a cluster formation scheme, and a routing algorithm for data transmission between cluster-heads and the base station. All the sensor nodes present in the network take part in the cluster-head selection process. The Staggered Clustering Protocol (SCP) has been proposed as a new energy-efficient clustering protocol for WSNs. This algorithm aims at choosing cluster-heads that ensure both intra-cluster and inter-cluster data transmission are energy-efficient. The cluster formation scheme is accomplished by exchanging messages between non-cluster-head nodes and the cluster-head to ensure a balanced energy load among cluster-heads. An energy-efficient clustering algorithm for wireless sensor networks using particle swarm optimization (EEC-PSO) has been proposed to ensure energy efficiency by creating an optimized number of clusters. It also improves the link quality between the cluster-heads and their cluster member nodes.
Finding a set of suitable cluster-heads from N sensor nodes is considered an NP-hard optimization problem. An application of WSNs in brain-computer interfaces (BCIs) has been proposed to detect the drowsiness of a driver at the wheel. The sensors, placed in a braincap worn by the driver, are divided into small clusters. The sensed data, the EEG signals, are then transferred towards the base station through the cluster-heads. The base station may be placed at a location near the driver. The received data are processed to decide when to trigger the warning tone.
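Cluster-head election in energy-aware protocols commonly favors nodes with more residual energy. The sketch below is a generic energy-weighted election, not the thesis's DDCP, SCP, or EEC-PSO; it simply samples cluster-heads without replacement, with probability proportional to each node's remaining battery energy.

```python
import random

def elect_cluster_heads(energies, n_heads, rng=random.Random(0)):
    """Pick n_heads distinct cluster-heads, with selection probability
    proportional to residual energy (generic illustration only)."""
    pool = dict(enumerate(energies))  # node id -> residual energy
    heads = []
    for _ in range(n_heads):
        total = sum(pool.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for node, e in pool.items():
            acc += e
            if r <= acc:
                heads.append(node)
                del pool[node]  # sample without replacement
                break
    return heads
```

In a real protocol the election would be distributed (each node decides locally from its own energy and neighborhood messages), whereas this centralized version only illustrates the weighting.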

    Advancements in Measuring and Modeling the Mechanical and Hydrological Properties of Snow and Firn: Multi-sensor Analysis, Integration, and Algorithm Development

    Estimating snow mechanical properties – such as elastic modulus, stiffness, and strength – is important for understanding how effectively a vehicle can travel over snow-covered terrain. Vehicle instrumentation data and observations of the snowpack are valuable for improving the estimates of winter vehicle performance. Combining in-situ and remotely-sensed snow observations, driver input, and vehicle performance sensors requires several techniques of data integration. I explored correlations between measurements spanning from millimeter to meter scales, beginning with the SnowMicroPenetrometer (SMP) and instruments applied to snow that were designed for measuring the load bearing capacity and the compressive and shear strengths of roads and soils. The spatial distribution of snow’s mechanical properties is still largely unknown. From this initial work, I determined that snow density remains a useful proxy for snowpack strength. To measure snow density, I applied multi-sensor electromagnetic methods. Using spatially distributed snowpack, terrain, and vegetation information developed in the subsequent chapters, I developed an over-snow vehicle performance model. To measure vehicle performance, I combined driver and vehicle data into a newly coined metric, the Normalized Difference Mobility Index (NDMI). Then, I applied regression methods to distribute NDMI from spatial snow, terrain, and vegetation properties. Mobility prediction is useful for the strategic advancement of warfighting in cold regions. The security of water resources is climatologically inequitable and water stress causes international conflict. Water resources derived from snow are essential for modern societies in climates where snow is the predominant source of precipitation, such as the western United States. Snow water equivalent (SWE) is a critical parameter for yearly water supply forecasting and can be calculated by multiplying the snow depth by the snow density.
In this work, I combined high-spatial-resolution light detection and ranging (LiDAR) measured snow depths with ground-penetrating radar (GPR) measurements of two-way travel-time (TWT) to solve for snow density. Then, using LiDAR-derived terrain and vegetation features as predictors in a multiple linear regression, the density observations are distributed across the SnowEx 2020 study area at Grand Mesa, Colorado. The modeled density resolved detailed patterns that agree with the known interactions of snow with wind, terrain, and vegetation. The integration of radar and LiDAR sensors shows promise as a technique for estimating SWE across entire river basins and for evaluating observational or physics-based snow-density models. Accurate estimation of SWE is a means of water security. In our changing climate, snow and ice mass are being permanently lost from the cryosphere. Mass balance is an indicator of the (in)stability of glaciers and ice sheets. Surface mass balance (SMB) may be estimated by multiplying the thickness of an annual snowpack layer by its density. However, unlike in seasonal snowpack applications, the ages of annual firn layers are unknown. To estimate SMB, I modeled the firn depth, density, and age using empirical and numerical approaches. The annual SMB history shows cyclical patterns representing the combination of atmospheric, oceanic, and anthropogenic climate forcing, which may serve as evaluation or assimilation data in climate model retrievals of SMB. The advancements made using the SMP, multi-channel GPR arrays, and airborne LiDAR and radar within this dissertation have made it possible to spatially estimate the snow depth, density, and water equivalent in seasonal snow, glaciers, and ice sheets. Open access, process automation, repeatability, and accuracy were key design parameters of the analyses and algorithms developed within this work.
The many different campaigns, objectives, and outcomes composing this research documented the successes and limitations of multi-sensor estimation techniques for a broad range of cryosphere applications.

    Inversion Methods in Atmospheric Remote Sounding

    The mathematical theory of inversion methods is applied to the remote sounding of atmospheric temperature, humidity, and aerosol constituents.
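A canonical building block of such retrievals is damped (Tikhonov-regularized) least squares, x = (AᵀA + λI)⁻¹Aᵀy, which stabilizes the inversion of an ill-conditioned forward model A. The sketch below is a generic illustration in pure Python, not a method from the book; the names are ours.

```python
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def tikhonov_retrieve(A, y, lam=1e-2):
    """Damped least-squares retrieval x = (A^T A + lam*I)^(-1) A^T y."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Aty = [sum(A[r][i] * y[r] for r in range(m)) for i in range(n)]
    return solve_linear(AtA, Aty)
```

With λ = 0 this reduces to ordinary least squares; increasing λ trades fidelity to the measurements for smoothness of the retrieved profile.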

    Interstitial-Scale Modeling of Packed-Bed Reactors

    Packed beds are common to adsorption scrubbers, packed-bed reactors, and trickle-bed reactors widely used across the petroleum, petrochemical, and chemical industries. The microstructure of these packed beds is generally very complex and has tremendous influence on heat, mass, and momentum transport phenomena at the micro and macro length scales within the bed. On the reactor scale, bed geometry strongly influences overall pressure drop, residence time distribution, and conversion of species through domain-fluid interactions. On the interstitial scale, particle boundary layer formation, fluid-to-particle mass transfer, and local mixing are controlled by the turbulence and dissipation existing around packed particles. In the present research, a CFD model is developed using OpenFOAM (www.openfoam.org) to directly resolve momentum and scalar transport in both laminar and turbulent flow fields, where the interstitial velocity field is resolved using the Navier-Stokes equations, i.e., with no pseudo-continuum-based assumptions. A discussion detailing the process of generating the complex domain using a Monte-Carlo packing algorithm is provided, along with relevant details required to generate an arbitrary polyhedral mesh describing the packed bed. Lastly, an algorithm coupling OpenFOAM with a linear system solver using the graphics processing unit (GPU) computing paradigm was developed and will be discussed in detail.
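The simplest Monte-Carlo route to a packed-bed geometry is random sequential addition: propose random sphere centers and keep only those that do not overlap previously placed spheres. The sketch below is in the spirit of, but not identical to, the packing algorithm described; real bed generators typically add gravity settling or collective rearrangement to reach realistic packing fractions.

```python
import random

def pack_spheres(n, radius, box=1.0, max_tries=10000, rng=random.Random(1)):
    """Random sequential addition of equal spheres in a cubic box.
    Returns a list of accepted (x, y, z) centers with no overlaps."""
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        # Propose a center that keeps the whole sphere inside the box.
        p = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        # Accept only if it overlaps no previously placed sphere.
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= (2 * radius) ** 2
               for q in centers):
            centers.append(p)
    return centers
```

The accepted centers would then be exported as geometry for polyhedral meshing around the particles.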

    The role of experience in common sense and expert problem solving

    Issued as Progress reports [nos. 1-5], Reports [nos. 1-6], and Final report, Project no. G-36-617 (includes Projects nos. GIT-ICS-87/26, GIT-ICS-85/19, and GIT-ICS-85/18).

    Control of multiclass queueing systems with abandonments and adversarial customers

    This thesis considers the defensive surveillance of multiple public areas which are the open, exposed targets of adversarial attacks. We address the operational problem of identifying a real-time decision-making rule for a security team in order to minimise the damage an adversary can inflict within the public areas. We model the surveillance scenario as a multiclass queueing system with customer abandonments, wherein the operational problem translates into developing service policies for a server in order to minimise the expected damage an adversarial customer can inflict on the system. We consider three different surveillance scenarios which may occur in real-world security operations. In each scenario it is only possible to calculate optimal policies in small systems or in special cases; hence we focus on developing heuristic policies which can be computed in practice, and we demonstrate their effectiveness in numerical experiments. In the random adversary scenario, the adversary attacks the system according to a probability distribution known to the server. This problem is a special case of a more general stochastic scheduling problem. We develop new results which complement the existing literature, based on priority policies and an effective approximate policy improvement algorithm. We also consider the scenario of a strategic adversary who chooses where to attack. We model the interaction of the server and adversary as a two-person zero-sum game. We develop an effective heuristic based on an iterative algorithm which populates a small set of service policies to be randomised over. Finally, we consider the scenario of a strategic adversary who chooses both where and when to attack, and formulate it as a robust optimisation problem. In this case, we demonstrate the optimality of the last-come first-served policy in single queue systems.
In systems with multiple queues, we develop effective heuristic policies based on the last-come first-served policy which incorporate randomisation both within service policies and across service policies.
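One classical iterative scheme for computing the randomisation in a two-person zero-sum game is fictitious play, in which each player repeatedly best-responds to the opponent's empirical play. The sketch below is a generic stand-in for the thesis's policy-populating algorithm, not a reproduction of it: rows are server policies (minimising damage), columns are adversary attack choices (maximising damage).

```python
def fictitious_play(payoff, iters=20000):
    """Approximate mixed strategies for a zero-sum matrix game by
    fictitious play: the row player minimises payoff, the column
    player maximises it.  Returns (row_freqs, col_freqs)."""
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_vals = [0.0] * m   # cumulative payoff of each row vs column history
    col_vals = [0.0] * n   # cumulative payoff of each column vs row history
    r, c = 0, 0
    for _ in range(iters):
        row_counts[r] += 1
        col_counts[c] += 1
        for j in range(n):
            col_vals[j] += payoff[r][j]
        for i in range(m):
            row_vals[i] += payoff[i][c]
        r = min(range(m), key=lambda i: row_vals[i])  # row best response
        c = max(range(n), key=lambda j: col_vals[j])  # column best response
    return ([x / iters for x in row_counts],
            [x / iters for x in col_counts])
```

On matching pennies, the empirical frequencies converge (slowly) to the uniform mixed equilibrium, illustrating why randomising over a set of policies can outperform any single deterministic policy against a strategic adversary.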