521,374 research outputs found

    A fast, effective local search for scheduling independent jobs in heterogeneous computing environments

    The efficient scheduling of independent computational jobs in a heterogeneous computing (HC) environment is an important problem in domains such as grid computing. Finding optimal schedules for such an environment is (in general) an NP-hard problem, so heuristic approaches must be used. Work on other NP-hard problems has shown that solutions found by heuristic algorithms can often be improved by applying local search procedures to the solutions found. This paper describes a simple but effective local search procedure for scheduling independent jobs in HC environments which, when combined with fast construction heuristics, can find shorter schedules on benchmark problems than other solution techniques in the literature, and in significantly less time.
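The abstract does not give the procedure itself, but the approach it describes, a fast construction heuristic followed by local search, can be sketched. The ETC (expected-time-to-compute) matrix, the greedy construction rule, and the single-job move neighborhood below are illustrative assumptions, not the paper's exact algorithm:

```python
def greedy_schedule(etc):
    """Construction heuristic: assign each job to the machine that would
    finish it earliest given current loads (a Min-Min-style rule).
    etc[j][m] is the execution time of job j on machine m."""
    n_jobs, n_machines = len(etc), len(etc[0])
    loads = [0.0] * n_machines
    assign = [0] * n_jobs
    for j in range(n_jobs):
        m = min(range(n_machines), key=lambda m: loads[m] + etc[j][m])
        assign[j] = m
        loads[m] += etc[j][m]
    return assign, loads

def makespan(loads):
    return max(loads)

def local_search(etc, assign, loads):
    """Hill-climbing: repeatedly move a single job to another machine
    whenever the move lowers the max load of the two machines involved."""
    improved = True
    while improved:
        improved = False
        for j in range(len(assign)):
            m_old = assign[j]
            for m_new in range(len(loads)):
                if m_new == m_old:
                    continue
                new_old = loads[m_old] - etc[j][m_old]
                new_new = loads[m_new] + etc[j][m_new]
                if max(new_old, new_new) < max(loads[m_old], loads[m_new]):
                    loads[m_old], loads[m_new] = new_old, new_new
                    assign[j] = m_new
                    improved = True
                    m_old = m_new
    return assign, loads
```

Because each accepted move strictly lowers the larger of the two affected machine loads, the makespan never increases and the search terminates at a local optimum.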

    Minimizing energy below the glass thresholds

    Focusing on the optimization version of the random K-satisfiability problem, the MAX-K-SAT problem, we study the performance of the finite-energy version of the Survey Propagation (SP) algorithm. We show that a simple (linear time) backtrack decimation strategy is sufficient to reach configurations well below the lower bound for the dynamic threshold energy and very close to the analytic prediction for the optimal ground states. A comparative numerical study of one of the most efficient local search procedures is also given. Comment: 12 pages, submitted to Phys. Rev. E, accepted for publication.
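Survey Propagation with backtrack decimation is too involved for a short sketch, but the "efficient local search procedures" such studies compare against are typically WalkSAT-style algorithms. A minimal, hypothetical WalkSAT for MAX-SAT (DIMACS-style signed-integer literals, mixing random-walk and greedy flips) might look like:

```python
import random

def unsat_clauses(clauses, assignment):
    """Clauses are lists of nonzero ints; literal v means variable |v|,
    satisfied when assignment[|v|] matches the sign of v."""
    return [c for c in clauses
            if not any((lit > 0) == assignment[abs(lit)] for lit in c)]

def walksat(clauses, n_vars, max_flips=10000, p_noise=0.5, seed=0):
    """Minimize the number of violated clauses by repeatedly flipping a
    variable from a random unsatisfied clause."""
    rng = random.Random(seed)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    best = dict(assignment)
    best_unsat = len(unsat_clauses(clauses, assignment))
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assignment)
        if not unsat:
            return assignment, 0
        clause = rng.choice(unsat)
        if rng.random() < p_noise:
            var = abs(rng.choice(clause))  # random-walk move
        else:
            # greedy move: flip the variable leaving fewest clauses unsat
            def cost(v):
                assignment[v] = not assignment[v]
                c = len(unsat_clauses(clauses, assignment))
                assignment[v] = not assignment[v]
                return c
            var = min((abs(lit) for lit in clause), key=cost)
        assignment[var] = not assignment[var]
        cur = len(unsat_clauses(clauses, assignment))
        if cur < best_unsat:
            best_unsat, best = cur, dict(assignment)
    return best, best_unsat
```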

    Mathematic modeling of the Earth's surface and the process of remote sensing

    It is shown that real data from remote sensing of the Earth from outer space are not well suited to the search for optimal procedures with which to process such data. To work out such procedures, it is proposed that data synthesized with the help of mathematical modeling be used. A criterion for similarity to reality is formulated. The basic principles for constructing methods for modeling remote sensing data are recommended. A concrete method is formulated for modeling a complete cycle of radiation transformations in remote sensing. A computer program that realizes the proposed method is described. Some results from calculations are presented which show that the method satisfies the requirements imposed on it.

    Two-stage hybrid feature selection algorithms for diagnosing erythemato-squamous diseases

    This paper proposes two-stage hybrid feature selection algorithms to build stable and efficient diagnostic models, and introduces a new accuracy measure to assess the models. The two-stage hybrid algorithms adopt Support Vector Machines (SVM) as the classification tool; the extended Sequential Forward Search (SFS), Sequential Forward Floating Search (SFFS), and Sequential Backward Floating Search (SBFS), respectively, as search strategies; and the generalized F-score (GF) to evaluate the importance of each feature. The new accuracy measure serves as the criterion to evaluate the performance of a temporary SVM and thereby direct the feature selection algorithms. These hybrid methods combine the advantages of filters and wrappers to select the optimal feature subset from the original feature set and build stable and efficient classifiers. To obtain stable, statistically sound and optimal classifiers, we conduct 10-fold cross validation experiments in the first stage; we then merge the 10 feature subsets selected across the folds into a new full feature set, on which each algorithm performs feature selection in the second stage. Each hybrid feature selection algorithm is repeated in the second stage on the fold that obtained the best result in the first stage. Experimental results show that the proposed two-stage hybrid feature selection algorithms construct diagnostic models with better accuracy than those built by the corresponding hybrid algorithms without the second-stage feature selection. Furthermore, our methods achieve better classification accuracy than the available algorithms for diagnosing erythemato-squamous diseases.
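As a schematic illustration (not the paper's exact pipeline, which wraps a temporary SVM inside 10-fold cross validation), the greedy backbone shared by SFS and its floating variants can be sketched with a pluggable evaluation function standing in for the SVM-based accuracy measure:

```python
def sequential_forward_search(features, evaluate, max_size=None):
    """Greedy SFS: starting from the empty set, repeatedly add the
    feature that most improves evaluate(subset); stop when no addition
    helps. `evaluate` is a stand-in for the classifier-based criterion."""
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining and (max_size is None or len(selected) < max_size):
        scored = [(evaluate(selected + [f]), f) for f in remaining]
        score, f = max(scored)
        if score <= best_score:
            break  # no candidate improves the current subset
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```

The floating variants (SFFS/SBFS) extend this loop with conditional removal steps that can undo earlier inclusions, trading extra evaluations for escape from nested-subset traps.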

    Beyond epistemic democracy: the identification and pooling of information by groups of political agents.

    This thesis addresses the mechanisms by which groups of agents can track the truth, particularly in political situations. I argue that the mechanisms which allow groups of agents to track the truth operate in two stages: firstly, there are search procedures; and secondly, there are aggregation procedures. Search procedures and aggregation procedures work in concert. The search procedures allow agents to extract information from the environment. At the conclusion of a search procedure the information will be dispersed among different agents in the group. Aggregation procedures, such as majority rule, expert dictatorship and negative reliability unanimity rule, then pool these pieces of information into a social choice. The institutional features of both search procedures and aggregation procedures account for the ability of groups to track the truth and amount to social epistemic mechanisms. Large numbers of agents are crucial for the epistemic capacities of both search procedures and aggregation procedures. This thesis makes two main contributions to the literature on social epistemology and epistemic democracy. Firstly, most current accounts focus on the Condorcet Jury Theorem and its extensions as the relevant epistemic mechanism that can operate in groups of political agents. The introduction of search procedures to epistemic democracy is (mostly) new. Secondly, the thesis introduces a two-stage framework to the process of group truth-tracking. In addition to showing how the two procedures of search and aggregation can operate in concert, the framework highlights the complexity of social choice situations. Careful consideration of different types of social choice situation shows that different aggregation procedures will be optimal truth-trackers in different situations. Importantly, there will be some situations in which aggregation procedures other than majority rule will be best at tracking the truth.
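The epistemic benefit of large numbers under majority rule, the Condorcet Jury Theorem setting the thesis builds on, is easy to make concrete. Assuming independent voters who are each correct with probability p, the exact probability that an odd-sized group's majority verdict is correct is:

```python
from math import comb

def majority_accuracy(n, p):
    """Exact probability that an odd-sized jury of n independent voters,
    each correct with probability p, returns the correct majority
    verdict: sum of the binomial tail above n/2."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For p > 1/2 this accuracy grows with group size toward 1, while for p < 1/2 it falls toward 0, which illustrates why different aggregation procedures can be the better truth-trackers in different situations.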

    A simple uniformly optimal method without line search for convex optimization

    Line search (or backtracking) procedures have been widely employed in first-order methods for solving convex optimization problems, especially those with unknown problem parameters (e.g., the Lipschitz constant). In this paper, we show that line search is superfluous in attaining the optimal rate of convergence for solving a convex optimization problem whose parameters are not given a priori. In particular, we present a novel accelerated gradient descent type algorithm, the auto-conditioned fast gradient method (AC-FGM), that achieves the optimal O(1/k^2) rate of convergence for smooth convex optimization without requiring an estimate of a global Lipschitz constant or the employment of line search procedures. We then extend AC-FGM to solve convex optimization problems with Hölder continuous gradients and show that it automatically achieves the optimal rates of convergence uniformly for all problem classes, with the desired accuracy of the solution as the only input. Finally, we report encouraging numerical results that demonstrate the advantages of AC-FGM over previously developed parameter-free methods for convex optimization.
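For contrast with the paper's contribution, the classical accelerated method it improves upon can be sketched: the textbook Nesterov scheme below also attains the O(1/k^2) rate, but it needs the Lipschitz constant L as an input, which is exactly what AC-FGM dispenses with. This is a generic sketch of the standard method, not the AC-FGM algorithm itself:

```python
import math

def nesterov_agd(grad, x0, L, steps):
    """Classical accelerated gradient descent with a known Lipschitz
    constant L, attaining f(x_k) - f* <= 2*L*||x0 - x*||^2 / (k+1)^2."""
    x = list(x0)   # iterate sequence
    y = list(x0)   # extrapolated points where gradients are taken
    t = 1.0        # momentum parameter
    for _ in range(steps):
        g = grad(y)
        x_new = [yi - gi / L for yi, gi in zip(y, g)]  # gradient step
        t_new = (1 + math.sqrt(1 + 4 * t * t)) / 2
        # extrapolate past the new iterate using the momentum weights
        y = [xn + (t - 1) / t_new * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```

On an ill-conditioned quadratic the fixed step 1/L keeps the method stable while the momentum term delivers the accelerated rate; a line-search variant would instead probe several trial step sizes per iteration.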