
    Stochastic make-to-stock inventory deployment problem: an endosymbiotic psychoclonal algorithm based approach

    Integrated steel manufacturers (ISMs) have no single fixed product line; they produce finished products directly from ore. This increases the uncertainty an ISM faces regarding both the nature of the finished product and the level of customer demand. At present, low-cost mini-mills compete strongly with ISMs on cost, which has compelled the ISM industry to target customers who want exotic products and fast, reliable deliveries. To meet this objective, ISMs are exploring the option of satisfying part of their demand by converting strategically placed semi-finished products, which increases the variety of products an ISM can deliver within a short lead time. In this paper the authors propose a new hybrid evolutionary algorithm, named endosymbiotic-psychoclonal (ESPC), to decide what and how much to stock as semi-finished product inventory. In the proposed approach, the ability of the previously proposed psychoclonal algorithm to exploit the search space is increased by making antibodies and antigens more cooperative, interacting species. The efficacy of the proposed algorithm is tested on randomly generated datasets and the results are compared with other evolutionary algorithms such as genetic algorithms (GA) and simulated annealing (SA). The comparison of ESPC with GA and SA demonstrates the superiority of the proposed algorithm, both in the quality of the solutions obtained and in the convergence time required to reach an optimal or near-optimal solution.
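    For orientation, the sketch below shows a minimal clonal-selection loop, the immune-inspired core that psychoclonal algorithms build on. It is a hedged illustration only: the actual ESPC algorithm, its endosymbiotic antibody/antigen interaction, and its cost model are defined in the paper, while the stocking encoding, caps, and cost function here are invented stand-ins.

```python
import random

# Hypothetical illustration of a clonal-selection loop for a make-to-stock
# decision. N_PRODUCTS, MAX_STOCK, and cost() are invented for the sketch
# and do not come from the paper.

N_PRODUCTS = 10          # candidate semi-products that could be stocked
MAX_STOCK = 50           # per-product stocking cap (assumed)

def cost(stock_plan):
    # Stand-in objective: holding cost plus a toy shortage penalty.
    holding = sum(stock_plan)
    shortage = sum(max(0, 30 - s) for s in stock_plan)
    return holding + 2 * shortage

def mutate(plan, rate):
    # Hypermutation: perturb each stocking level with probability `rate`.
    return [min(MAX_STOCK, max(0, s + random.randint(-5, 5)))
            if random.random() < rate else s
            for s in plan]

def clonal_selection(pop_size=20, clones=5, generations=200):
    population = [[random.randint(0, MAX_STOCK) for _ in range(N_PRODUCTS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        elite = population[:pop_size // 2]     # keep the best antibodies
        next_pop = list(elite)
        for rank, antibody in enumerate(elite):
            # Better-ranked antibodies receive gentler mutation.
            rate = 0.1 + 0.8 * rank / len(elite)
            next_pop += [mutate(antibody, rate) for _ in range(clones)]
        population = sorted(next_pop, key=cost)[:pop_size]
    return min(population, key=cost)

if __name__ == "__main__":
    best = clonal_selection()
    print("stock plan:", best, "cost:", cost(best))
```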

    Decentralized algorithm of dynamic task allocation for a swarm of homogeneous robots

    The current trends in the robotics field have led to the development of large-scale swarm robot systems, which are deployed for complex missions. The robots in these systems must communicate and interact with each other and with their environment to process complex tasks. A major obstacle for this trend is the poor task-planning mechanism, which includes both task decomposition and task allocation. Task allocation means distributing and scheduling a set of tasks to be accomplished by a group of robots so as to minimize cost while satisfying operational constraints. The task allocation mechanism must be run by each robot in the swarm whenever it senses a change in the environment, to make sure the robot is assigned to the most appropriate task; if not, the robot should reassign itself to its nearest task. The main contribution of this thesis is to maximize the overall efficiency of the system by minimizing the total time needed to solve the dynamic task allocation problem. Near-optimal allocation schemes are found using a novel hybrid decentralized algorithm for dynamic task allocation in a swarm of homogeneous robots, where the number of tasks exceeds the number of robots in the system. This hybrid approach combines the Simulated Annealing (SA) optimization technique with the Discrete Particle Swarm Optimization (DPSO) technique. Another major contribution of this thesis is the formulation of the dynamic task allocation equations for homogeneous swarm robotics using integer linear programming, together with the cost function and constraints for the given problem. The DPSO and SA algorithms are then developed to accomplish the task in minimal time. Simulation is first implemented for two test cases in MATLAB. The simulation results show that PSO exhibits smaller and more stable convergence characteristics, while the SA technique yields better solution quality. After development of the hybrid algorithm combining SA with PSO, the simulation is extended to fifteen more test cases with different swarm dimensions to confirm the robustness and scalability of the proposed algorithm over the traditional PSO and SA optimization techniques. Based on the simulation results, the hybrid DPSO/SA approach proves more efficient, in both small and large swarm sizes, than the traditional Particle Swarm Optimization and Simulated Annealing techniques alone. The simulation results also demonstrate that the proposed approach can dislodge a state from a local minimum and guide it to the global minimum. Thus the proposed hybrid DPSO/SA algorithm possesses both the high solution quality of SA and the fast convergence of PSO. A parameter-selection process for the hybrid algorithm is proposed as a further contribution, in an attempt to enhance the algorithm's efficiency, because heuristic optimization techniques are very sensitive to parameter changes. In addition, verification is performed by comparing the proposed algorithm with an exact solver, the Hungarian algorithm, in terms of computational time, number of iterations, and solution quality. This comparison shows that the proposed algorithm gives superior performance in almost all swarm sizes, with stable and small execution times. However, it also shows that the hybrid algorithm's cost values (the distance traveled by the robots to perform the tasks) are larger than those of the Hungarian algorithm, although its execution time is much better. Finally, the proposed algorithm is implemented and extensively tested in a real experiment using a swarm of four robots; the robots used in the experiment are Elisa-III robots.
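    As a rough illustration of the SA half of such a hybrid, the sketch below anneals task-to-robot assignments to minimize total travel distance, with more tasks than robots. The coordinates, neighbourhood move, and cooling schedule are invented for the example; the thesis's ILP formulation and DPSO update rules are not reproduced here.

```python
import math
import random

# Illustrative simulated annealing over task-to-robot assignments.
# ROBOTS, TASKS, and the cooling parameters are invented stand-ins.

random.seed(1)
ROBOTS = [(random.random(), random.random()) for _ in range(4)]
TASKS = [(random.random(), random.random()) for _ in range(12)]

def total_distance(assign):
    # assign[t] = index of the robot responsible for task t; the cost is
    # the summed straight-line distance from each robot to its tasks.
    return sum(math.dist(ROBOTS[r], TASKS[t]) for t, r in enumerate(assign))

def anneal(t0=1.0, alpha=0.995, steps=5000):
    assign = [random.randrange(len(ROBOTS)) for _ in TASKS]
    best, best_cost, temp = assign[:], total_distance(assign), t0
    for _ in range(steps):
        cand = assign[:]
        # Neighbourhood move: reassign one random task to a random robot.
        cand[random.randrange(len(TASKS))] = random.randrange(len(ROBOTS))
        delta = total_distance(cand) - total_distance(assign)
        # Accept improvements always, worsenings with Boltzmann probability,
        # which is what lets SA escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            assign = cand
            if total_distance(assign) < best_cost:
                best, best_cost = assign[:], total_distance(assign)
        temp *= alpha
    return best, best_cost

if __name__ == "__main__":
    allocation, dist = anneal()
    print("allocation:", allocation, "total distance: %.3f" % dist)
```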

    A Micro Power Hardware Fabric for Embedded Computing

    Field Programmable Gate Arrays (FPGAs) mitigate many of the problems encountered in the development of ASICs by offering flexibility, faster time-to-market, and amortized NRE costs, among other benefits. While FPGAs are increasingly used for complex computational applications such as signal and image processing, networking, and cryptology, they are far from ideal for these tasks due to relatively high power consumption and silicon-usage overheads compared to direct ASIC implementation. A reconfigurable device that exhibits ASIC-like power characteristics with FPGA-like costs and tool support is desirable to fill this void. In this research, a parameterized, reconfigurable fabric model named the domain-specific fabric (DSF) is developed that exhibits ASIC-like power characteristics for Digital Signal Processing (DSP) style applications. Using this model, the impact of varying different design parameters on power and performance has been studied. Optimization techniques such as local search and simulated annealing are used to determine the appropriate interconnect for a specific set of applications, and a design space exploration tool has been developed to automate the process and generate a tailored architectural instance of the fabric. The fabric has been synthesized in a 160 nm cell-based ASIC fabrication process from OKI and a 130 nm process from IBM. A detailed power-performance analysis has been completed using signal and image processing benchmarks from the MediaBench benchmark suite and elsewhere, with comparisons to other hardware and software implementations. The optimized fabric implemented in the 130 nm process yields energy within 3X of a direct ASIC implementation, 330X better than a Virtex-II Pro FPGA, and 2016X better than an Intel XScale processor.
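    The sketch below illustrates the kind of local search over a discrete fabric-parameter space that such a design-space-exploration tool might perform. It is not the actual DSF tool: the parameter names and the energy proxy are invented stand-ins, whereas the real flow evaluates synthesized power and performance.

```python
import random

# Hill-climbing local search over a hypothetical fabric configuration space.
# PARAM_SPACE and energy() are invented for illustration only.

PARAM_SPACE = {
    "alu_count":     [2, 4, 8, 16],
    "interconnect":  ["nearest", "segmented", "crossbar"],
    "pipeline_regs": [0, 1, 2],
}

def energy(cfg):
    # Toy cost model: richer interconnect and more ALUs cost area/power,
    # while more ALUs and pipelining reduce delay.
    wiring = {"nearest": 1, "segmented": 2, "crossbar": 4}[cfg["interconnect"]]
    area = cfg["alu_count"] * wiring
    delay = 32 / cfg["alu_count"] + (2 - cfg["pipeline_regs"])
    return area * delay   # crude energy = area/power proxy * delay proxy

def local_search(iters=200):
    cfg = {k: random.choice(v) for k, v in PARAM_SPACE.items()}
    for _ in range(iters):
        # Neighbour: re-draw one parameter; accept non-worsening moves.
        key = random.choice(list(PARAM_SPACE))
        cand = dict(cfg, **{key: random.choice(PARAM_SPACE[key])})
        if energy(cand) <= energy(cfg):
            cfg = cand
    return cfg

if __name__ == "__main__":
    best = local_search()
    print(best, "energy proxy:", energy(best))
```

    Swapping the greedy acceptance rule for a temperature-based one turns this into the simulated annealing variant the text also mentions.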

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system had to be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems), though it did not need to be fully deployed. Systems that schedule actions under metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically; a minimal sketch of this distinction follows below. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques.
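    A hedged illustration of the metric-vs-symbolic distinction: a metric scheduler checks actions against numeric clock values, whereas a purely symbolic one only maintains ordering relations between events. All names and numbers below are invented.

```python
from dataclasses import dataclass

# Metric time constraints: feasibility is decided by arithmetic on an
# external numeric clock, not merely by symbolic ordering of events.

@dataclass
class Action:
    name: str
    earliest: float   # not before this clock value
    deadline: float   # must finish by this clock value
    duration: float

def feasible_at(action, clock):
    # Numeric check against the clock; a symbolic scheduler could only
    # assert relations like "A before B" without these comparisons.
    return (action.earliest <= clock
            and clock + action.duration <= action.deadline)

if __name__ == "__main__":
    a = Action("uplink-pass", earliest=10.0, deadline=25.0, duration=8.0)
    for t in (5.0, 12.0, 20.0):
        print("start at", t, "->", feasible_at(a, t))
```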

    Probabilistic Image Models and their Massively Parallel Architectures: A Seamless Simulation- and VLSI Design-Framework Approach

    Algorithmic robustness in real-world scenarios and real-time processing capability are the two essential, and at the same time contradictory, requirements that modern image-processing systems have to fulfill to go significantly beyond state-of-the-art systems. Without suitable image processing and analysis systems at hand that comply with these contradictory requirements, solutions and devices for the application scenarios of the next generation will not become reality, which would eventually lead to a serious restraint on innovation for various branches of industry. This thesis presents a coherent approach to the above problem. It first describes a massively parallel architecture template, and secondly a seamless simulation- and semiconductor-technology-independent design framework, for a class of probabilistic image models formulated on a regular Markovian processing grid. The architecture template is composed of different building blocks, which are rigorously derived from Markov Random Field theory under the constraints of massively parallel processing and technology independence. This systematic derivation procedure brings many benefits: it decouples the architecture characteristics from the constraints of any one specific semiconductor technology; it guarantees that the derived massively parallel architecture conforms to the theory; and it guarantees that the derived architecture is suitable for VLSI implementation. The simulation framework addresses the unique hardware-relevant simulation needs of MRF-based processing architectures. Furthermore, the framework ensures a qualified representation for simulating the image models and their massively parallel architectures by means of model-specific simulation modules. This allows for systematic studies combining numerical, architectural, timing, and massively parallel processing constraints, disclosing novel insights into MRF models and their hardware architectures. The design framework rests upon a graph-theoretical approach, which offers unique capabilities for meeting the VLSI demands of massively parallel MRF architectures: semiconductor-technology independence guarantees a technology-uncommitted architecture through several design steps without restricting the design space too early; design entry by means of behavioral descriptions allows for a functional representation without fixing the architecture at the outset; and the topology synthesis simplifies and separates the data-path and control-path synthesis. Detailed results discussed in the individual chapters, together with several additional results collected in the appendix, further substantiate the claims made in this thesis.
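    To make the "regular Markovian processing grid" concrete, the sketch below runs iterated-conditional-modes (ICM) sweeps for a textbook binary MRF: every site's update reads only its four neighbours, which is exactly the locality that makes such models amenable to massively parallel hardware. The energy terms and parameters are generic stand-ins, not the thesis's specific models or architecture.

```python
import random

# Textbook binary-MRF ICM sweep on a regular grid. W, H, BETA, and LAM
# are invented parameters for illustration only.

W, H = 16, 16
BETA, LAM = 1.0, 2.0   # smoothness weight, data-fidelity weight

random.seed(0)
observed = [[random.choice([-1, 1]) for _ in range(W)] for _ in range(H)]
labels = [row[:] for row in observed]

def local_energy(x, y, s):
    # Data term pulls label s toward the observation; the smoothness term
    # pulls it toward agreement with the 4-neighbourhood.
    e = -LAM * s * observed[y][x]
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            e -= BETA * s * labels[ny][nx]
    return e

def icm_sweep():
    # Each update depends only on the local neighbourhood, so all sites of
    # one checkerboard colour could be updated simultaneously in hardware.
    for y in range(H):
        for x in range(W):
            labels[y][x] = min((-1, 1), key=lambda s: local_energy(x, y, s))

if __name__ == "__main__":
    for _ in range(5):
        icm_sweep()
    print(sum(row.count(1) for row in labels), "of", W * H, "sites labelled +1")
```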