A bi-objective model for the single-machine scheduling problem with rejection cost and total tardiness minimization
We study the problem of scheduling jobs on a single machine with a rejection possibility, concurrently minimizing the total tardiness of the scheduled jobs and the total cost of the rejected ones. The model we consider is fully bi-objective, i.e., its aim is to enumerate the Pareto front. We tackle the problem both with and without hard deadlines. For the case without deadlines, we provide a pseudo-polynomial time algorithm, based on the dynamic program of Steiner and Zhang (2011), thereby proving that the problem is weakly NP-hard. For the case with deadlines, we propose a branch-and-bound algorithm and prove its efficiency by comparing it to an ε-constrained approach on benchmark instances based on those proposed in the literature on similar problems.
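The bi-objective structure described above can be illustrated with a brute-force Pareto-front enumerator. This is a toy sketch for tiny instances only (exponential time), not the paper's pseudo-polynomial dynamic program or branch-and-bound; job and field names are illustrative assumptions.

```python
from itertools import combinations, permutations

def pareto_front(jobs):
    """Enumerate Pareto-optimal (total_tardiness, rejection_cost) pairs by
    brute force. jobs: list of (processing_time, due_date, rejection_cost).
    Exponential in len(jobs); for illustration on tiny instances only."""
    points = set()
    n = len(jobs)
    for k in range(n + 1):
        for accepted in combinations(range(n), k):
            rej_cost = sum(jobs[i][2] for i in range(n) if i not in accepted)
            best_tard = float("inf")
            for order in permutations(accepted):
                t, tard = 0, 0
                for i in order:
                    p, d, _ = jobs[i]
                    t += p
                    tard += max(0, t - d)
                best_tard = min(best_tard, tard)
            points.add((best_tard, rej_cost))
    # keep only the non-dominated (Pareto-optimal) points
    return sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in points))
```

For two jobs (p, d, rejection cost) = (2, 1, 3) and (1, 2, 2), this yields the front [(0, 3), (1, 2), (2, 0)]: reject the late job, reject the cheap job, or schedule both.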
Better and faster solutions for the maximum diversity problem
The aim of the Maximum Diversity Problem (MDP) is to extract a subset M of given cardinality from a set of elements N, in such a way that the sum of the pairwise distances between the elements of M is maximum. This problem, introduced by Glover [7], has been deeply studied using GRASP methodologies [6, 1, 17, 2, 16]. Usually, effective algorithms owe their success more to the careful exploitation of problem-specific features than to the application of general-purpose methods. A solution for MDP has a very simple structure which cannot be exploited for sophisticated neighborhood search. This paper explores the performance of three alternative solution approaches, namely Tabu Search, Variable Neighborhood Search, and Scatter Search, comparing them with those of the best GRASP algorithms in the literature. We also focus our attention on the comparison of these three methods applied in their pure form.
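To make the objective concrete, here is a minimal greedy constructive heuristic for the MDP; it is a generic sketch, not the Tabu Search, VNS, or Scatter Search methods compared in the paper.

```python
def mdp_greedy(dist, m):
    """Greedy heuristic for the Maximum Diversity Problem: start from the
    pair at maximum distance, then repeatedly add the element maximizing
    the sum of distances to the current subset. dist is a symmetric matrix."""
    n = len(dist)
    i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda p: dist[p[0]][p[1]])
    sel = {i, j}
    while len(sel) < m:
        sel.add(max((k for k in range(n) if k not in sel),
                    key=lambda k: sum(dist[k][s] for s in sel)))
    return sel

def diversity(dist, sel):
    """Objective value: sum of pairwise distances inside the subset."""
    s = sorted(sel)
    return sum(dist[a][b] for idx, a in enumerate(s) for b in s[idx + 1:])
```

Metaheuristics such as those studied in the paper would then improve this starting solution by swapping selected and unselected elements.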
Using Speculative Computation and Parallelizing Techniques to Improve Scheduling of Control based Designs
Recent research results have seen the application of parallelizing techniques to high-level synthesis. In particular, the effect of speculative code transformations on mixed control-data flow designs has demonstrated effective results on schedule lengths. In this paper we first analyze the use of the control and data dependence graph as an intermediate representation that provides the possibility of extracting the maximum parallelism. Then we analyze the scheduling problem by formulating an approach based on Integer Linear Programming (ILP) to minimize the number of control steps given the amount of resources. We improve previously proposed ILP scheduling approaches by introducing a new conditional resource sharing constraint, which is then extended to the case of speculative computation. The ILP formulation has been solved by using a branch-and-cut framework, which provides better results than standard branch-and-bound techniques.
R. Cordone; F. Ferrandi; G. Palermo; M. Santambrogio; D. Sciuto
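The underlying problem, assigning operations to control steps under dependence and resource constraints, can be sketched with a simple greedy list scheduler. This is a heuristic illustration of the problem the ILP solves exactly; it does not reproduce the paper's formulation, conditional resource sharing constraint, or branch-and-cut solver.

```python
def list_schedule(deps, n_units):
    """Greedy list scheduling: place each operation in the earliest control
    step where all its predecessors have finished and a functional unit is
    free. deps: {op: set of predecessor ops}; n_units: units per step."""
    step_of = {}          # op -> assigned control step (1-based)
    load = {}             # control step -> number of busy units
    remaining = dict(deps)
    while remaining:
        # ops whose predecessors are all scheduled (Kahn-style topological pass)
        ready = [op for op, pre in remaining.items()
                 if all(p in step_of for p in pre)]
        for op in sorted(ready):
            earliest = 1 + max((step_of[p] for p in remaining[op]), default=0)
            s = earliest
            while load.get(s, 0) >= n_units:   # resource constraint
                s += 1
            step_of[op] = s
            load[s] = load.get(s, 0) + 1
            del remaining[op]
    return step_of
```

With a single unit, two independent operations must serialize; with two units they share a control step, which is exactly the resource/latency trade-off the ILP optimizes.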
Probabilistic performance modelling when using partial reconfiguration to accelerate streaming applications with non-deterministic task scheduling
Many streaming applications composed of multiple tasks self-adapt their tasks' execution at runtime in response to the processed data. This type of application promises a better solution to context switches, at the cost of non-deterministic task scheduling. Partial reconfiguration is a unique feature of FPGAs that offers not only higher resource reuse but also performance improvements when properly applied. In this paper, a probabilistic approach is used to estimate the acceleration of streaming applications with an unknown task schedule thanks to the application of partial reconfiguration. This novel approach provides insights into the feasible acceleration when regions of the FPGA are partially reconfigured in order to exploit the available resources by processing multiple tasks in parallel. Moreover, the impact of different strategies or heuristics on the final performance is included in this analysis. As a result, not only an estimation of the achievable acceleration is obtained, but also a guide for the design stage when searching for the highest performance.
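One simple way to reason about acceleration under non-deterministic scheduling is Monte Carlo simulation over random task orders. The model below is a hypothetical sketch (random arrival order, greedy least-loaded assignment to reconfigurable regions), not the paper's probabilistic formulation.

```python
import random

def expected_speedup(durations, n_regions, trials=2000, seed=0):
    """Monte Carlo estimate of speedup from running tasks on n_regions
    partially reconfigurable regions. Each trial shuffles the task order
    (modeling a non-deterministic schedule) and assigns each task greedily
    to the least-loaded region; speedup = sequential time / makespan."""
    rng = random.Random(seed)
    seq = sum(durations)
    acc = 0.0
    for _ in range(trials):
        order = durations[:]
        rng.shuffle(order)
        regions = [0.0] * n_regions
        for d in order:
            regions[regions.index(min(regions))] += d
        acc += seq / max(regions)
    return acc / trials
```

Swapping the assignment rule lets one compare how different placement strategies or heuristics affect the expected acceleration, mirroring the design-stage exploration described above.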
Molecular dynamics simulation of aqueous solutions of 26-unit segments of p(NIPAAm) and of p(NIPAAm) "doped" with amino acid based comonomers
We have performed 75-ns molecular dynamics (MD) simulations of aqueous solutions of a 26-unit NIPAAm oligomer at two temperatures, 302 and 315 K, below and above the experimentally determined lower critical solution temperature (LCST) of p(NIPAAm). We have been able to show that at 315 K the oligomer assumes a compact form, while it keeps a more extended form at 302 K. A similar behavior has been demonstrated for a similar NIPAAm oligomer, where two units had been substituted by methacryloyl-l-valine (MAVA) comonomers, one of them being charged and one neutral. For another analogous oligomer, where the same units had been substituted by methacryloyl-l-leucine (MALEU) comonomers, no transition from the extended to the more compact conformation has been found within the same simulation time. Statistical analysis of the trajectories indicates that this transition is related to the dynamics of the oligomer backbone, and to the formation of intramolecular hydrogen bonds and water bridges between distant units of the solute. In the MAVA case, we have also evidenced an important role of the neutral MAVA comonomer in stabilizing the compact coiled structure. In the MALEU case, the corresponding comonomer is not equally efficacious and, possibly, even hinders the readjustment of the oligomer backbone. Finally, the self-diffusion coefficient of water molecules surrounding the oligomers at the two temperatures for selected relevant times is observed to depend characteristically on the distance from the solute molecules.
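The compact-versus-extended distinction discussed above is typically quantified by the radius of gyration of the chain. Below is a minimal, self-contained implementation of that standard metric; the study itself would compute it over full MD trajectories with real atomic masses.

```python
def radius_of_gyration(coords, masses=None):
    """Mass-weighted radius of gyration of a set of 3-D coordinates.
    A small Rg indicates a compact (coiled) conformation, a large Rg an
    extended one. coords: list of (x, y, z); masses default to uniform."""
    n = len(coords)
    if masses is None:
        masses = [1.0] * n
    total = sum(masses)
    # center of mass
    com = [sum(m * c[k] for m, c in zip(masses, coords)) / total
           for k in range(3)]
    # mean squared distance from the center of mass, mass-weighted
    rg2 = sum(m * sum((c[k] - com[k]) ** 2 for k in range(3))
              for m, c in zip(masses, coords)) / total
    return rg2 ** 0.5
```

Tracking this value along the 302 K and 315 K trajectories is a common way to detect the coil-to-globule transition described in the abstract.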
Toward Smart Building Design Automation: Extensible CAD Framework for Indoor Localization Systems Deployment
Over the last years, many smart building applications, such as indoor localization or safety systems, have been the subject of intense research. Smart environments usually rely on several hardware nodes equipped with sensors, actuators, and communication functionalities. The high level of heterogeneity and the lack of standardization across technologies make the design of such environments a very challenging task, as each installation has to be designed manually and performed ad hoc for the specific building. On the other hand, many different systems show common characteristics, like the strict dependency on the building floor plan, and share similar requirements, such as a node allocation that provides sensing coverage and node connectivity. This paper provides a computer-aided design application for the design of smart building systems based on the installation of hardware nodes across the indoor space. The tool provides a site-specific algorithm for cost-effective deployment of wireless localization systems, with the aim of maximizing the localization accuracy. Experimental results from a real-world environment show that the proposed site-specific model can improve the positioning accuracy of general models from the state of the art. The tool, available open-source, is modular and extensible through plug-ins, allowing the modeling of building systems with different requirements.
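The sensing-coverage requirement mentioned above is often approached as a set-cover-style placement problem. The sketch below is a generic greedy coverage heuristic under an assumed circular sensing radius; it is not the tool's site-specific, accuracy-driven deployment algorithm.

```python
def greedy_deployment(targets, candidates, radius):
    """Greedy node placement: repeatedly install the candidate position
    covering the most still-uncovered target points within `radius`.
    targets, candidates: lists of (x, y) points on the floor plan."""
    def covers(c, t):
        return (c[0] - t[0]) ** 2 + (c[1] - t[1]) ** 2 <= radius ** 2

    uncovered = set(targets)
    placed = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: sum(covers(c, t) for t in uncovered))
        gained = {t for t in uncovered if covers(best, t)}
        if not gained:
            break  # remaining targets are unreachable from any candidate
        placed.append(best)
        uncovered -= gained
    return placed
```

A site-specific model like the one in the paper would replace the simple disc-coverage test with a learned or measured accuracy model per location.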
The Spectrum of Integrated Millimeter Flux of the Magellanic Clouds and 30-Doradus from TopHat and DIRBE Data
We present measurements of the integrated flux relative to the local background of the Large and Small Magellanic Clouds and the region 30-Doradus (the Tarantula Nebula) in the LMC in four frequency bands centered at 245, 400, 460, and 630 GHz, based on observations made with the TopHat telescope. We combine these observations with the corresponding measurements for the DIRBE bands 8, 9, and 10 to cover the frequency range 245-3000 GHz (100-1220 micrometers) for these objects. We present spectra for all three objects, fit these spectra to a single-component greybody emission model, and report best-fit dust temperatures, optical depths, and emissivity power-law indices, and we compare these results with other measurements in these regions and elsewhere. Using published dust grain opacities, we estimate the mass of the measured dust component in the three regions.
Comment: 41 pages, 4 figures. Accepted for publication in the Astrophysical Journal
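For reference, a single-component greybody model of the kind fitted above is conventionally written as (standard form; the paper's exact parametrization may differ):

```latex
S_\nu = \Omega\, B_\nu(T)\,\bigl(1 - e^{-\tau_\nu}\bigr),
\qquad
\tau_\nu = \tau_0 \left(\frac{\nu}{\nu_0}\right)^{\beta},
\qquad
B_\nu(T) = \frac{2 h \nu^3}{c^2}\,\frac{1}{e^{h\nu/kT} - 1},
```

where Ω is the source solid angle, B_ν(T) the Planck function, T the dust temperature, τ_0 the optical depth at a reference frequency ν_0, and β the emissivity power-law index; T, τ_0, and β are the three best-fit quantities reported in the abstract.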
Some polynomial special cases for the Minimum Gap Graph Partitioning Problem
We study various polynomial special cases for the problem of partitioning a vertex-weighted undirected graph into p connected subgraphs with minimum gap between the largest and the smallest vertex weight.
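To fix ideas on the objective, here is one common reading of the gap of a p-partition, computed in a few lines; the paper's exact definition (e.g. total gap versus maximum gap over components) may differ, so this is an illustrative assumption.

```python
def partition_gap(weights, parts):
    """Total gap of a partition: sum over components of
    (max vertex weight - min vertex weight) within the component.
    weights: {vertex: weight}; parts: list of vertex lists."""
    return sum(max(weights[v] for v in part) - min(weights[v] for v in part)
               for part in parts)
```

Minimizing this quantity subject to each part inducing a connected subgraph is what makes the general problem hard, and what the polynomial special cases above restrict.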
Uniform partition of graphs: Complexity results, algorithms and formulations
In this presentation, we address centered and non-centered equipartition problems on graphs into p connected components (p-partitions). In the former case, each class of the partition must contain exactly one special vertex called a center, whereas in the latter, partitions are not required to fulfil this condition. Among the different equipartition problems considered in the literature, we focus on: 1) Most Uniform Partition (MUP) and 2) Uniform Partition (UP). Both criteria are defined either w.r.t. weights assigned to each vertex or to costs assigned to each vertex-center pair. Costs are assumed to be flat, i.e., they are independent of the topology of the graph. With respect to costs, MUP minimizes the difference between the maximum and minimum cost of the components of a partition, and UP refers to optimal min-max or max-min partitions. Additionally, we present various problems of partitioning a vertex-weighted undirected graph into p connected components minimizing the gap, i.e., a measure related to the difference between the largest and the smallest vertex weight in each component of the partition.
For all the problems considered here, we provide polynomial time algorithms, as well as NP-completeness results, even on very special classes of graphs such as trees. For the centered partitioning problems, we also present a new mathematical programming formulation that can be compared with the ones already provided in the literature for similar problems.
Effects of macroalgae loss in an Antarctic marine food web: applying extinction thresholds to food web studies
Antarctica is seriously affected by climate change, particularly at the Western Antarctic Peninsula (WAP), where rapid regional warming is observed. Potter Cove is a WAP fjord at the Shetland Islands that constitutes a biodiversity hotspot; over the last years, average annual air temperatures at Potter Cove increased by 0.66 °C, coastal glaciers declined, and suspended particulate matter increased due to ice melting. Macroalgae are the main energy source for all consumers and detritivores of Potter Cove. Some effects of climate change favor pioneer macroalgae species that exploit new ice-free areas, but climate change can also reduce rates of photosynthesis and intensify competition between species due to the increase in suspended particulate matter. In this study, we evaluated possible consequences of climate change on the Potter Cove food web by simulating the extinction of macroalgae and detritus using a topological approach with extinction thresholds. Thresholds represent the minimum number of incoming links necessary for a species' survival. When we simulated the extinction of macroalgae species at random, a threshold of extinction beyond 50% was necessary to obtain a significant number of secondary extinctions, while with a 75% threshold a real collapse of the food web occurred. Our results indicate that the Potter Cove food web is relatively robust to macroalgae extinction. This is dramatically different from what has been found in other food webs, where a reduction of 10% in prey intake caused a disproportionate increase in secondary extinctions. The robustness of the Potter Cove food web was mediated by omnivory and redundancy, which were particularly relevant in this food web. When we eliminated larger-biomass species, more secondary extinctions occurred; a similar response was observed when more-connected species were deleted, yet there was no correlation between large-biomass and high-degree species.
This similarity could be explained because both criteria involve key species that produce an emergent effect on the food web. In this way, large-biomass and high-degree species could be acting as a source for species with few trophic interactions or low redundancy. Based on this work, we expect the Potter Cove food web to be robust to changes in macroalgae species caused by climate change until a high threshold of stress is reached, after which negative effects are expected to spread through the entire food web, leading to its collapse.
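The general technique of topological extinction cascades with thresholds can be sketched compactly. This is a minimal illustration of the method, with an invented two-consumer food web; it is not the Potter Cove network or the study's exact threshold rule.

```python
def secondary_extinctions(prey_of, removed, threshold=1.0):
    """Topological extinction cascade with a survival threshold: a consumer
    goes extinct once the fraction of its prey links lost reaches
    `threshold`. threshold=1.0 is the classic all-prey-lost rule; lower
    values model partial-loss thresholds like those used in the study.
    prey_of: {consumer: set of prey}; basal species need not appear as keys."""
    extinct = set(removed)
    changed = True
    while changed:                      # propagate until the cascade settles
        changed = False
        for species, prey in prey_of.items():
            if species in extinct or not prey:
                continue
            lost_fraction = len(prey & extinct) / len(prey)
            if lost_fraction >= threshold:
                extinct.add(species)
                changed = True
    return extinct - set(removed)       # secondary extinctions only
```

In the toy web below, removing the algae kills the specialist herbivore under any threshold, while the predator (which retains half its prey) only collapses once the threshold drops to 50%, the same qualitative pattern the abstract reports for Potter Cove.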