475 research outputs found

    Scheduling spacecraft operations

    A prototype scheduling system named MAESTRO, currently under development, is being used to explore possible approaches to the spacecraft operations scheduling problem. Results indicate that an appropriate combination of heuristic and other techniques can provide an acceptable solution to the scheduling problem over a wide range of operational scenarios and management approaches. These can include centralized or distributed instrument or systems control, batch or incremental scheduling, scheduling loose resource envelopes or exact profiles, and scheduling with varying degrees of user intervention. Techniques used within MAESTRO to provide this flexibility and power include constraint propagation mechanisms, multiple asynchronous processes, prioritized transaction-based command management, resource opportunity calculation, user-alterable selection and placement mechanisms, and maintenance of multiple schedules and resource profiles. These techniques, and the scheduling complexities that require them, are discussed.
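
    The abstract names resource opportunity calculation and maintenance of resource profiles among MAESTRO's techniques. As a rough illustration of those two ideas only (not MAESTRO's actual design; all names below are hypothetical), the sketch finds feasible start times against a discretized resource profile and then propagates a placement into that profile:

```python
# Hypothetical sketch: find where an activity fits in a resource profile,
# then commit it by updating the remaining availability.

def find_opportunities(profile, demand, duration):
    """Return start times where 'demand' units fit for 'duration' steps."""
    starts = []
    for start in range(len(profile) - duration + 1):
        if all(profile[t] >= demand for t in range(start, start + duration)):
            starts.append(start)
    return starts

def place_activity(profile, start, demand, duration):
    """Commit the activity, propagating its usage into the remaining profile."""
    for t in range(start, start + duration):
        profile[t] -= demand

# Example: 10 time steps with 5 units of a resource available at each step.
available = [5] * 10
slots = find_opportunities(available, demand=3, duration=4)
place_activity(available, slots[0], demand=3, duration=4)
print(slots, available)
```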

    Poster Abstract: Hierarchical Subchannel Allocation for Mode-3 Vehicle-to-Vehicle Sidelink Communications

    In V2V Mode-3, eNodeBs assign subchannels to vehicles so that they can periodically broadcast CAM messages [b2]. A crucial aspect is to ensure that vehicles in the same cluster broadcast in time-orthogonal subchannels (a subchannel is a time-frequency resource chunk capable of sufficiently conveying a CAM message) in order to avoid conflicts. In general, resource/subchannel allocation problems can be represented as weighted bipartite graphs. However, in this scenario there is an additional time-orthogonality constraint which cannot be straightforwardly handled by conventional graph matching methods [b3]. Our approach therefore takes this constraint into account and performs the allocation sequentially, based on the constrainedness of each cluster. To illustrate the gist of the problem, Fig. 1 shows two partially overlapping clusters in which a conflict between vehicles V_8 and V_{10} arises because the allotted subchannels are in the same subframe.
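
    A minimal sketch of the sequential, constrainedness-ordered allocation idea described above, under assumed data structures (this is not the paper's algorithm): clusters are handled from most to least constrained, and each vehicle receives a (subframe, subchannel) pair in a subframe not used by any other vehicle of its cluster, so same-cluster broadcasts stay time-orthogonal:

```python
# Illustrative assumption: clusters are sets of vehicle ids; a resource is a
# (subframe, subchannel) pair; vehicles shared by overlapping clusters keep
# whatever assignment they received first.

def allocate(clusters, n_subframes, n_subchannels_per_subframe):
    assignment = {}                      # vehicle -> (subframe, subchannel)
    # Most constrained first: many vehicles relative to the available subframes.
    for cluster in sorted(clusters, key=lambda c: len(c) / n_subframes, reverse=True):
        used_subframes = {assignment[v][0] for v in cluster if v in assignment}
        for v in cluster:
            if v in assignment:
                continue                 # fixed by an overlapping cluster
            for sf in range(n_subframes):
                if sf in used_subframes:
                    continue             # would break time orthogonality
                taken = {assignment[u][1] for u in assignment if assignment[u][0] == sf}
                free = set(range(n_subchannels_per_subframe)) - taken
                if free:
                    assignment[v] = (sf, min(free))
                    used_subframes.add(sf)
                    break
            else:
                raise ValueError(f"cluster too large for {n_subframes} subframes")
    return assignment

# Two partially overlapping clusters, as in the situation described above.
clusters = [{"V1", "V2", "V8"}, {"V8", "V9", "V10"}]
print(allocate(clusters, n_subframes=4, n_subchannels_per_subframe=2))
```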

    Trying again to fail-first

    For constraint satisfaction problems (CSPs), Haralick and Elliott [1] introduced the Fail-First Principle and defined it in terms of minimizing branch depth. By devising a range of variable ordering heuristics, each in turn trying harder to fail first, Smith and Grant [2] showed that adherence to this strategy does not guarantee a reduction in search effort. The present work builds on Smith and Grant. It benefits from the development of a new framework for characterizing heuristic performance that defines two policies: one concerned with enhancing the likelihood of correctly extending a partial solution, the other with minimizing the effort to prove insolubility. The Fail-First Principle can be restated as calling for adherence to the second, fail-first policy, while discounting the other, promise policy. Our work corrects some deficiencies in the work of Smith and Grant, and goes on to confirm their finding that the Fail-First Principle, as originally defined, is insufficient. We then show that adherence to the fail-first policy must be measured in terms of the size of insoluble subtrees, not branch depth. We also show that for soluble problems, both policies must be considered in evaluating heuristic performance. Hence, even in its proper form the Fail-First Principle is insufficient. We also show that the "FF" series of heuristics devised by Smith and Grant is a powerful tool for evaluating heuristic performance, including the subtle relations between heuristic features and adherence to a policy.
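
    To make the fail-first idea concrete, here is a small backtracking solver that branches on the variable with the fewest domain values; this is the textbook fail-first ordering, not the "FF" heuristic series studied in the paper:

```python
# Minimal backtracking with a smallest-domain ("fail-first") variable ordering.

def solve(domains, consistent, assignment=None):
    """domains: var -> list of values; consistent: partial assignment -> bool."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    # Fail-first: branch on the unassigned variable with the fewest values, so
    # that insoluble subtrees are entered (and refuted) as early as possible.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        trial = {**assignment, var: value}
        if consistent(trial):
            result = solve(domains, consistent, trial)
            if result is not None:
                return result
    return None

# Example: 3-colour a small graph; adjacent vertices must get different colours.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
doms = {"a": [1, 2, 3], "b": [1, 2], "c": [2, 3]}
ok = lambda asg: all(asg[x] != asg[y] for x, y in edges if x in asg and y in asg)
print(solve(doms, ok))
```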

    A tabu search procedure for developing robust predictive project schedules.

    Proactive scheduling aims at generating robust baseline schedules that are protected as much as possible against disruptions that may occur during project execution. In this paper, we focus on disruptions caused by stochastic resource availabilities and aim at generating stable baseline schedules. A schedule's robustness (stability) is measured by the weighted deviation between the planned and the actually realized activity starting times during project execution. We present a tabu search procedure that operates on a surrogate, free-slack-based objective function. Its effectiveness is demonstrated by extensive computational results obtained on a set of randomly generated test instances.
    Keywords: Project scheduling; Robustness; Proactive; Stability
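
    As a hedged sketch of the quantities involved (the exact surrogate used by the paper's tabu search may differ), the snippet below computes the stability objective, i.e. the weighted planned-versus-realized start-time deviation, and a simple free-slack-based surrogate that a neighbourhood search such as tabu search could evaluate on the baseline schedule alone:

```python
# Illustrative measures only; activity names and weights are made up.

def instability(planned, realized, weights):
    """Weighted deviation between planned and realized activity start times."""
    return sum(weights[a] * abs(realized[a] - planned[a]) for a in planned)

def free_slack_surrogate(start, duration, predecessors, weights):
    """Reward idle time in front of heavily weighted activities (larger is better)."""
    total = 0.0
    for a, preds in predecessors.items():
        if preds:
            gap = start[a] - max(start[p] + duration[p] for p in preds)
            total += weights[a] * max(gap, 0)
    return total

# Tiny two-activity example: "a" precedes "b", and "b" is expensive to shift.
planned = {"a": 0, "b": 5}
realized = {"a": 0, "b": 6}                      # "b" slipped by one period
weights = {"a": 1.0, "b": 4.0}
print(instability(planned, realized, weights))   # 4.0
print(free_slack_surrogate(planned, {"a": 3, "b": 2}, {"a": [], "b": ["a"]}, weights))  # 8.0
```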

    Extremal Optimization at the Phase Transition of the 3-Coloring Problem

    We investigate the phase transition of the 3-coloring problem on random graphs, using the extremal optimization heuristic. 3-coloring is among the hardest combinatorial optimization problems and is closely related to a 3-state anti-ferromagnetic Potts model. Like many other such optimization problems, it has been shown to exhibit a phase transition in its ground state behavior under variation of a system parameter: the graph's mean vertex degree. This phase transition is often associated with the instances of highest complexity. We use extremal optimization to measure the ground state cost and the "backbone", an order parameter related to ground state overlap, averaged over a large number of instances near the transition for random graphs of size n up to 512. For graphs up to this size, benchmarks show that extremal optimization reaches ground states and explores a sufficient number of them to give the correct backbone value after about O(n^{3.5}) update steps. Finite size scaling gives a critical mean degree value α_c = 4.703(28). Furthermore, the exploration of the degenerate ground states indicates that the backbone order parameter, measuring the constrainedness of the problem, exhibits a first-order phase transition.
    Comment: RevTex4, 8 pages, 4 postscript figures; related information available at http://www.physics.emory.edu/faculty/boettcher
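
    The following is an illustrative tau-EO sketch for 3-coloring, not the paper's implementation: vertices are ranked by how many of their incident edges are violated, one is picked with a power-law bias toward the worst, and it receives a random new colour. The value tau = 1.4 is an assumption, not taken from the paper.

```python
import random

def conflicts(v, colour, adj):
    """Number of neighbours of v sharing v's colour."""
    return sum(colour[v] == colour[u] for u in adj[v])

def extremal_opt_3col(adj, steps=10000, tau=1.4, seed=0):
    rng = random.Random(seed)
    vertices = list(adj)
    colour = {v: rng.randrange(3) for v in vertices}
    best = dict(colour)
    best_cost = sum(conflicts(v, colour, adj) for v in vertices) // 2
    for _ in range(steps):
        # Rank vertices from worst (most violated edges) to best.
        ranked = sorted(vertices, key=lambda v: -conflicts(v, colour, adj))
        # Power-law selection: rank k is chosen with probability ~ k^(-tau).
        weights = [(k + 1) ** -tau for k in range(len(ranked))]
        v = rng.choices(ranked, weights=weights, k=1)[0]
        colour[v] = rng.randrange(3)
        cost = sum(conflicts(u, colour, adj) for u in vertices) // 2
        if cost < best_cost:
            best, best_cost = dict(colour), cost
            if best_cost == 0:
                break
    return best, best_cost

# Example: a 5-cycle is 3-colourable, so the best cost found should be 0.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(extremal_opt_3col(adj)[1])
```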

    KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval

    We study the ability of state-of-the-art models to answer constraint satisfaction queries for information retrieval (e.g., 'a list of ice cream shops in San Diego'). In the past, such queries were considered tasks that could only be solved via web search or knowledge bases. More recently, large language models (LLMs) have demonstrated initial emergent abilities in this task. However, many current retrieval benchmarks are either saturated or do not measure constraint satisfaction. Motivated by rising concerns around factual incorrectness and hallucinations of LLMs, we present KITAB, a new dataset for measuring constraint satisfaction abilities of language models. KITAB consists of book-related data across more than 600 authors and 13,000 queries, and also offers an associated dynamic data collection and constraint verification approach for acquiring similar test data for other authors. Our extended experiments on GPT4 and GPT3.5 characterize and decouple common failure modes across dimensions such as information popularity, constraint types, and context availability. Results show that in the absence of context, models exhibit severe limitations as measured by irrelevant information, factual errors, and incompleteness, many of which worsen as information popularity decreases. While context availability mitigates irrelevant information, it is not helpful for satisfying constraints, identifying fundamental barriers to constraint satisfaction. We open-source our contributions to foster further research on improving constraint satisfaction abilities of future models.
    Comment: 23 pages
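
    Purely as an illustration of what constraint verification against a ground-truth catalogue can look like (this is not KITAB's schema or code; the checker, field names, and catalogue below are made up), a verifier might separate satisfied, irrelevant, and missed items for a simple title constraint:

```python
# Hypothetical checker: compare a model's answer list to a ground-truth
# catalogue filtered by the query constraint.

def verify(answer_titles, ground_truth_titles, constraint):
    truth = {t for t in ground_truth_titles if constraint(t)}
    answered = set(answer_titles)
    return {
        "satisfied": sorted(answered & truth),    # correct and relevant
        "irrelevant": sorted(answered - truth),   # hallucinated or off-constraint
        "missed": sorted(truth - answered),       # incompleteness
    }

catalogue = ["A Wild Sheep Chase", "Kafka on the Shore", "After Dark"]
model_answer = ["After Dark", "Norwegian Wood"]
print(verify(model_answer, catalogue, lambda t: t.startswith("A")))
```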

    Unit propagation with stable watches

    Unit propagation is the hottest path in CDCL SAT solvers; therefore the related data structures, algorithms and implementation details are well studied and highly optimized. State-of-the-art implementations are based on reduced occurrence tracking with two watched literals per clause and one blocking literal per watcher in order to further reduce the number of clause accesses. In this paper, we show that using runtime statistics for watched-literal selection can improve the performance of state-of-the-art SAT solvers. We present a method for efficiently keeping track of the spans during which literals are satisfied, and for using this statistic to improve watcher selection. An implementation of our method in the SAT solver CaDiCaL can solve more instances of the SAT Competition 2019 and 2020 benchmark sets and is particularly strong on satisfiable cryptographic instances.
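
    The sketch below shows plain two-watched-literal propagation with a crude per-literal score standing in for the satisfied-span statistic: when a watch has to move, literals seen satisfied more often are preferred. It omits blocking literals and is an illustration of the idea, not CaDiCaL's implementation.

```python
# Literals are nonzero ints in DIMACS style; clause[0] and clause[1] are the
# two watched literals of each clause.

class Propagator:
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.assign = {}                       # var -> bool
        self.score = {}                        # lit -> times seen satisfied
        self.watch = {}                        # lit -> indices of watching clauses
        for i, c in enumerate(self.clauses):
            for lit in c[:2]:
                self.watch.setdefault(lit, []).append(i)

    def value(self, lit):
        v = self.assign.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def propagate(self, lit):
        """Assign lit True, then run unit propagation; return False on conflict."""
        queue = [lit]
        while queue:
            l = queue.pop()
            if self.value(l) is False:
                return False
            self.assign[abs(l)] = l > 0
            for ci in list(self.watch.get(-l, [])):
                clause = self.clauses[ci]
                if clause[0] == -l:            # keep the falsified watch at index 1
                    clause[0], clause[1] = clause[1], clause[0]
                if self.value(clause[0]) is True:
                    self.score[clause[0]] = self.score.get(clause[0], 0) + 1
                    continue                   # clause already satisfied
                # Look for a replacement watch, preferring "stable" literals.
                candidates = [x for x in clause[2:] if self.value(x) is not False]
                if candidates:
                    new = max(candidates, key=lambda x: self.score.get(x, 0))
                    k = clause.index(new)
                    clause[1], clause[k] = clause[k], clause[1]
                    self.watch[-l].remove(ci)
                    self.watch.setdefault(new, []).append(ci)
                elif self.value(clause[0]) is None:
                    queue.append(clause[0])    # unit: the other watch is forced
                else:
                    return False               # conflict: every literal is false
        return True

# Example: (x1 or x2) and (not x1 or x2); assigning x2 = False forces a conflict.
p = Propagator([[1, 2], [-1, 2]])
print(p.propagate(-2))
```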