    On the Average Complexity of Moore's State Minimization Algorithm

    We prove that, for any finite alphabet and for the uniform distribution over deterministic and accessible automata with n states, the average complexity of Moore's state minimization algorithm is in O(n log n). Moreover, this bound is tight in the case of unary automata.
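    As a concrete reference point, here is a minimal Python sketch of the partition-refinement loop at the heart of Moore's algorithm; the dict-based DFA encoding is our illustration, not the paper's formalism. Each round costs O(n) per alphabet letter, and the average-case result amounts to the expected number of rounds being O(log n).

```python
def moore_minimize(states, alphabet, delta, accepting):
    """Partition `states` into Moore-equivalence classes.

    delta: dict mapping (state, letter) -> state
    accepting: set of accepting states
    """
    # Initial partition: accepting vs. non-accepting states.
    cls = {q: int(q in accepting) for q in states}
    while True:
        # A state's signature is its own class plus the classes
        # reached on each letter; equal signatures stay together.
        sig = {q: (cls[q],) + tuple(cls[delta[q, a]] for a in alphabet)
               for q in states}
        ids, new_cls = {}, {}
        for q in states:
            new_cls[q] = ids.setdefault(sig[q], len(ids))
        if len(ids) == len(set(cls.values())):
            return new_cls  # no class was split: fixed point reached
        cls = new_cls
```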

    Neural Network Exploration Using Optimal Experiment Design

    We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
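    To make the idea concrete, here is a hedged sketch of MacKay-style variance-based query selection for a model that is linear in its parameters (a crude stand-in for the paper's neural network; the function and parameter names are ours): the learner queries the candidate input whose prediction is currently most uncertain under the information matrix of the data seen so far.

```python
import numpy as np

def next_query(Phi_seen, candidates, ridge=1e-3):
    """Pick the candidate feature vector with the largest predictive
    variance under the current (regularized) information matrix."""
    d = Phi_seen.shape[1]
    A = Phi_seen.T @ Phi_seen + ridge * np.eye(d)  # Fisher information
    A_inv = np.linalg.inv(A)
    # Predictive variance of each candidate: phi^T A^{-1} phi.
    scores = np.einsum('ij,jk,ik->i', candidates, A_inv, candidates)
    return candidates[np.argmax(scores)]
```

    Greedily adding each selected point to Phi_seen and re-querying approximates a D-optimal design, which is the flavor of OED the Fedorov/MacKay line of work builds on.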

    Balancing labor requirements in a manufacturing environment

    “This research examines construction environments within manufacturing facilities, specifically semiconductor manufacturing facilities, and develops a new optimization method that is scalable to large construction projects with multiple execution modes and resource constraints. The model is developed to represent real-world conditions in which project activities do not have a fixed, prespecified duration but rather a total amount of work that is directly impacted by the level of resources assigned. To expand on the concept of resource-driven project durations, this research aims to mimic manufacturing construction environments by allowing non-continuous resource allocation to project tasks. This allows resources to shift between projects in order to achieve the optimal result for the project manager. Our model generates a novel multi-objective resource-constrained project scheduling problem. Specifically, two objectives are studied: the minimization of total direct labor cost and the minimization of resource leveling. This research utilizes multiple techniques to achieve resource leveling and discusses the advantage each one provides to the project team, as well as a comparison of the Pareto fronts between the resource leveling and cost minimization objective functions. Finally, a heuristic is developed utilizing partial linear relaxation to scale the optimization model to large-scale projects. The computational results from multiple randomly generated case studies show that the new heuristic method is capable of generating high-quality solutions in significantly less computational time”--Abstract, page iv
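    As a toy illustration of the leveling side of the problem, the snippet below scores a schedule with one standard resource-leveling objective, the sum of squared per-period resource usage; the thesis's exact formulation may differ, and the task tuples are invented for the example.

```python
def leveling_cost(schedule, horizon):
    """schedule: list of (start, finish, crew_size) task tuples.
    Returns the sum over periods of (usage in that period)^2,
    which penalizes peaky resource profiles and rewards level ones."""
    usage = [0] * horizon
    for start, finish, crew in schedule:
        for t in range(start, finish):
            usage[t] += crew
    return sum(u * u for u in usage)

# Two schedules with the same total work: the level one scores lower.
peaky = [(0, 2, 4), (0, 2, 4)]   # both tasks stacked in periods 0-1
level = [(0, 2, 4), (2, 4, 4)]   # tasks staggered across the horizon
assert leveling_cost(peaky, 4) > leveling_cost(level, 4)  # 128 > 64
```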

    Implementing Energy Parsimonious Circuits through Inexact Designs

    Inexact circuits, circuits in which accuracy of the output can be traded for cost (energy, delay, and/or area) savings, have been receiving increasing attention of late due to invariable inaccuracies in nanometer-scale circuits and a concomitant growing desire for ultra-low-energy embedded systems. Most previous approaches to realizing inexact circuits relied on scaling circuit-level operational parameters (such as supply voltage) to achieve the cost and accuracy tradeoffs, and suffered from significant implementation overheads that drastically reduced the gains. In this thesis, two novel architecture-level approaches called Probabilistic Pruning and Probabilistic Logic Minimization are proposed to realize inexact circuits with zero overhead. Extensive simulations on various architectures of datapath elements and a prototype chip fabrication demonstrate that normalized gains as large as 2X-9.5X in Energy-Delay-Area product can be obtained for relative error as low as 10^-6% to 1% compared to corresponding conventional correct designs.
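    The pruning idea can be rendered schematically as follows; the real method operates on gate-level netlists with hardware-derived activity and significance figures, so the node model and greedy policy here are illustrative assumptions only.

```python
def prune(nodes, error_budget):
    """nodes: list of (name, activity, significance, cost) tuples.
    Greedily drop the nodes whose expected error contribution
    (activity * significance) is smallest, while the accumulated
    error stays within error_budget. Returns (kept, cost_saved)."""
    ranked = sorted(nodes, key=lambda n: n[1] * n[2])  # least impact first
    kept, saved, err = list(nodes), 0.0, 0.0
    for node in ranked:
        name, activity, significance, cost = node
        if err + activity * significance > error_budget:
            break
        kept.remove(node)  # this node's output is cheap to get wrong
        err += activity * significance
        saved += cost
    return kept, saved
```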

    On the benefits of resource disaggregation for virtual data centre provisioning in optical data centres

    Virtual Data Centre (VDC) allocation requires the provisioning of both computing and network resources. Their joint provisioning allows for an optimal utilization of the physical Data Centre (DC) infrastructure resources. However, traditional DCs can suffer from computing resource underutilization due to the rigid capacity configurations of the server units, resulting in high computing resource fragmentation across the DC servers. To overcome these limitations, the disaggregated DC paradigm has recently been introduced. Thanks to resource disaggregation, it is possible to allocate the exact amount of resources needed to provision a VDC instance. In this paper, we focus on the static planning of a shared optically interconnected disaggregated DC infrastructure to support a known set of VDC instances to be deployed on top. To this end, we provide optimal and sub-optimal techniques to determine the capacity (in terms of both computing and network resources) required to support the expected set of VDC demands. Next, we quantitatively evaluate the benefits yielded by the disaggregated DC paradigm against traditional DC architectures, considering various VDC profiles and Data Centre Network (DCN) topologies.
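    A toy comparison shows why disaggregation reduces fragmentation: with rigid servers, the leftover CPU/RAM slivers on each box are unusable, while a disaggregated pool can serve any demand set that fits in aggregate. The numbers and the first-fit policy are illustrative, not the paper's provisioning model.

```python
def servers_needed_rigid(requests, cpu_cap, ram_cap):
    """First-fit packing of (cpu, ram) requests into identical servers."""
    servers = []  # each entry: [free_cpu, free_ram]
    for cpu, ram in requests:
        for s in servers:
            if s[0] >= cpu and s[1] >= ram:
                s[0] -= cpu
                s[1] -= ram
                break
        else:
            servers.append([cpu_cap - cpu, ram_cap - ram])
    return len(servers)

requests = [(6, 4), (6, 4), (4, 12)]           # (CPU cores, RAM units)
assert servers_needed_rigid(requests, 8, 16) == 3
# Disaggregated pool: only aggregate capacity matters. Two servers'
# worth of pooled resources (16 cores, 32 RAM units) cover the total
# demand of (16, 20), whereas rigid packing above needs three servers.
```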

    Multi-Output ESOP Synthesis with Cascades of New Reversible Gate Family

    A reversible gate maps each output vector into a unique input vector and vice versa. The importance of reversible logic lies in the technological necessity that most near-future and all long-term future technologies will have to use reversible gates in order to reduce power. In this paper, a new generalized k*k reversible gate family is proposed, and a synthesis method for multi-output (factorized) ESOP using cascades of the new gate family is presented. To exploit the benefit of product sharing among the ESOPs, two graph-based data structures, the connectivity tree and the implementation graph, are used. Experimental results with some MCNC benchmark functions show that the number of gates in the multi-output ESOP cascades is almost equal to the number of products in the multi-output ESOP. However, this cascaded realization of multi-output ESOP generates a large number of garbage outputs and requires a large number of input constants, which need to be reduced in future research. The synthesis method is technology-independent and can be used in association with any known or future reversible technology.
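    The cascade construction is easy to simulate: an exclusive-or sum of products maps onto a chain of Toffoli-family gates, each XOR-ing the AND of its control lines onto an output line that starts as a constant-0 input. Plain generalized Toffoli gates stand in here for the paper's k*k gate family.

```python
from itertools import product

def cascade_eval(inputs, terms):
    """inputs: dict of variable values; terms: one tuple of control
    variables per ESOP product. The output line starts at 0 and each
    gate XORs the AND of its controls onto it."""
    out = 0
    for controls in terms:
        out ^= all(inputs[v] for v in controls)
    return out

# f(a, b, c) = ab XOR c, written as the ESOP terms {ab, c}.
terms = [('a', 'b'), ('c',)]
for a, b, c in product((0, 1), repeat=3):
    assert cascade_eval({'a': a, 'b': b, 'c': c}, terms) == ((a & b) ^ c)
```

    The constant-0 target line and the control lines that must be carried through unchanged are exactly where the input constants and garbage outputs mentioned above come from.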

    Adaptive constrained clustering with application to dynamic image database categorization and visualization.

    The advent of larger storage spaces, affordable digital capturing devices, and an ever-growing online community dedicated to sharing images has created a great need for efficient analysis methods. In fact, analyzing images for the purpose of automatic categorization and retrieval is quickly becoming an overwhelming task even for the casual user. Initially, systems designed for these applications relied on contextual information associated with images. However, it was realized that this approach does not scale to very large data sets and can be subjective. Researchers then proposed methods relying on the content of the images. This approach has also proved to be limited due to the semantic gap between the low-level representation of the image and the high-level user perception. In this dissertation, we introduce a novel clustering technique that is designed to combine multiple forms of information in order to overcome the disadvantages observed while using a single information domain. Our proposed approach, called Adaptive Constrained Clustering (ACC), is a robust, dynamic, and semi-supervised algorithm. It is based on minimizing a single objective function incorporating the abilities to: (i) use multiple feature subsets while learning cluster-independent feature relevance weights; (ii) search for the optimal number of clusters; and (iii) incorporate partial supervision in the form of pairwise constraints. The content of the images is used to extract the features used in the clustering process. The context information is used in constructing a set of appropriate constraints. These constraints are used as partial supervision information to guide the clustering process. The ACC algorithm is dynamic in the sense that the number of categories is allowed to expand and contract depending on the distribution of the data and the available set of constraints. We show that the proposed ACC algorithm is able to partition a given data set into meaningful clusters using an adaptive, soft constraint satisfaction methodology for the purpose of automatically categorizing and summarizing an image database. We show that the ACC algorithm has the ability to incorporate various types of contextual information. This contextual information includes: spatial information provided by geo-referenced images that include GPS coordinates pinpointing their location, temporal information provided by each image's time stamp indicating the capture time, and textual information provided by a set of keywords describing the semantics of the associated images.
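    As a hedged sketch of how pairwise constraints can steer assignments (in the spirit of semi-supervised clustering; ACC's actual objective additionally learns feature-relevance weights and adapts the number of clusters), consider a penalized assignment step with invented names:

```python
import numpy as np

def assign(X, centers, must_link, cannot_link, labels, w=1.0):
    """Assign each point to the cluster minimizing squared distance
    plus penalties for violated must-link / cannot-link constraints.
    must_link / cannot_link: dicts mapping a point index to the
    indices it is constrained with; labels start at -1 (unassigned)."""
    k = len(centers)
    for i in range(len(X)):
        costs = ((X[i] - centers) ** 2).sum(axis=1)
        for j in must_link.get(i, []):
            if labels[j] >= 0:  # penalize every cluster except j's
                costs += w * (np.arange(k) != labels[j])
        for j in cannot_link.get(i, []):
            if labels[j] >= 0:  # penalize j's own cluster
                costs += w * (np.arange(k) == labels[j])
        labels[i] = int(np.argmin(costs))
    return labels
```

    Alternating this assignment step with a center update gives a constrained k-means-style loop; raising w makes the constraints harder, while small w keeps them soft, matching the soft constraint satisfaction flavor described above.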