
    On the Use of Directed Moves for Placement in VLSI CAD

    Search-based placement methods have long been used for placing integrated circuits targeting the field programmable gate array (FPGA) and standard cell design styles. Such methods offer the potential for high-quality solutions but often come at the cost of long run-times compared to alternative methods. This dissertation examines strategies for enhancing local search heuristics---and in particular, simulated annealing---through the application of directed moves. These moves guide a search-based optimizer by focusing effort on states that are most likely to yield productive improvement, effectively pruning the search space. The engineering theory and implementation details of directed moves are discussed in the context of both field programmable gate array and standard cell designs. This work explores the ways in which such moves can be used to improve the quality of FPGA placements, improve the robustness of floorplan repair and legalization methods for mixed-size standard cell designs, and enhance the quality of detailed placement for standard cell circuits. The analysis presented herein confirms the validity and efficacy of directed moves, and supports the use of such heuristics within various optimization frameworks.
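
    The core idea can be sketched compactly: an annealer that mixes conventional random relocations with directed moves that pull a cell toward the centroid of the cells it shares nets with. The Python sketch below is illustrative only, assuming a toy grid placement with a half-perimeter wirelength (HPWL) cost; the move generator, cost model, and parameters are stand-ins, not the dissertation's actual formulation.

```python
import math
import random

def hpwl(placement, nets):
    """Total half-perimeter wirelength (HPWL) over all nets."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(placement, nets, grid=32, steps=20000, t0=5.0, alpha=0.9995, p_directed=0.3):
    """Simulated annealing mixing random and directed moves (illustrative)."""
    cost = hpwl(placement, nets)
    temp = t0
    cells = list(placement)
    for _ in range(steps):
        cell = random.choice(cells)
        old = placement[cell]
        if random.random() < p_directed:
            # Directed move: pull the cell toward the centroid of the cells it
            # shares nets with, focusing the search on likely-productive states.
            peers = [c for net in nets if cell in net for c in net if c != cell]
            if peers:
                new = (round(sum(placement[c][0] for c in peers) / len(peers)),
                       round(sum(placement[c][1] for c in peers) / len(peers)))
            else:
                new = (random.randrange(grid), random.randrange(grid))
        else:
            # Undirected move: a uniformly random relocation.
            new = (random.randrange(grid), random.randrange(grid))
        placement[cell] = new
        new_cost = hpwl(placement, nets)
        delta = new_cost - cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cost = new_cost            # accept the move
        else:
            placement[cell] = old      # reject: restore the old location
        temp *= alpha
    return placement, cost

# Illustrative use on a tiny netlist.
random.seed(0)
nets = [[0, 1, 2], [2, 3, 4], [4, 5, 6, 7]]
placement = {c: (random.randrange(32), random.randrange(32)) for c in range(8)}
placement, cost = anneal(placement, nets)
print(f"final HPWL: {cost}")
```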

    Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept

    This Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept document was developed as a first step in developing the CLEAR System Architecture (NASA/TM-2011-216956). The operational concept defines how the system will be used by the Constellation Program and what needs it meets, and creates scenarios for the major elements of the CLEAR architecture; these scenarios are generic enough to apply to near-Earth, Moon, and Mars missions. The operational concept involves basic assumptions about the overall program architecture and its interactions with the CLEAR system architecture, including spacecraft and operational constraints for near-Earth orbit, Moon, and Mars missions. The document addresses an incremental development strategy in which capabilities evolve over time, but it is structured to prevent obsolescence. The approach minimizes flight hardware by exploiting Internet-like telecommunications that enable CLEAR capabilities to remain on Earth and be uplinked as needed. To minimize crew time and operational cost, CLEAR exploits offline development and validation to support online teleoperations. Operational concept scenarios are developed for diagnostics, repair, and functional test operations. Many of the supporting functions defined in these operational scenarios are further defined as technologies in NASA/TM-2011-216956.

    Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques

    Video, as a key driver of the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), has begun to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic, and the basic concepts of EVA (e.g., definition, architectures) have not been fully elucidated due to the rapid development of the domain. To fill these gaps, we provide a comprehensive survey of recent efforts on EVA. We first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA. (Comment: 31 pages, 13 figures.)
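
    One representative EVA enabling technique is edge-side frame filtering: cheap computation on the camera or edge node discards redundant frames and forwards only informative ones to a heavyweight detector. The following is a minimal sketch under that assumption; the frame-difference trigger and threshold are illustrative choices, not techniques taken from the survey.

```python
import numpy as np

def frame_filter(frames, diff_threshold=12.0):
    """Yield only frames that differ enough from the last forwarded frame.

    A toy stand-in for edge-side filtering: the cheap difference test runs
    on the edge device, and only 'interesting' frames are forwarded to a
    heavyweight detector on an edge server or in the cloud.
    """
    last = None
    for frame in frames:
        gray = frame.mean(axis=2)  # cheap grayscale conversion
        if last is None or np.abs(gray - last).mean() > diff_threshold:
            last = gray
            yield frame  # forward for full video-analytics processing

# Illustrative use with random "video": repeated frames are suppressed.
rng = np.random.default_rng(0)
video = [rng.integers(0, 255, (240, 320, 3)).astype(np.float32) for _ in range(5)]
video += [video[-1]] * 3
kept = list(frame_filter(video))
print(f"forwarded {len(kept)} of {len(video)} frames")
```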

    Strategic cost management in a global supply chain

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. Includes bibliographical references (p. 100). In the face of an economic downturn, cost has become a focal point of supply chain management, and cost management is increasingly recognized as a vital core competency needed for survival. As companies transition from being vertically integrated to pursuing increasingly outsourced manufacturing strategies, modeling and monitoring the total cost of manufacturing products has become both crucial and complicated. In the context of the automated test equipment industry, this thesis explores the impact of outsourcing on product cost and cost management practices. It examines prevailing cost management practices with reference to design and procurement, as well as methods to leverage information technology and re-engineer business processes to manage "spend" effectively and efficiently. It surveys capabilities that are available through software and examines the cost-benefit tradeoffs that must be addressed in selecting such systems. by Venkatesh G. Rao, S.M., M.B.A.

    A Company-led Methodology for the Specification of Product Design Capabilities in Small and Medium Sized Electronics Companies

    It is the aim of the research reported in this thesis to improve the product design effectiveness of small and medium sized electronics companies in the United Kingdom. It does so by presenting a methodology for use by such firms which will enable them to specify product design capabilities that are resilient to changes in their respective business environments. The research has not, however, concerned itself with the details of particular electronics component technologies or with the advantages of various CAD or CAE products, although these are both important aspects of any design capability. Nor is it concerned with the implementation of the product design capability. The methodology, which represents a significant improvement on current practice, is a structured, company-driven approach which draws extensively upon the lessons of international design best practice. It uses well-proven tools and techniques to guide firms through the entire process of creating such capabilities - from the development of an appropriate Mission Statement to the identification of cost-effective and appropriate design system solutions which can readily be translated into action plans for improvement. The work emphasises the importance of adopting a holistic, systems approach which acknowledges the interrelationship between the management of the design process and its operational and supporting activities. The research has been structured around the experiences of companies which have implemented electronics design systems and which "own" the problem in question. Hence, a research strategy was adopted which was based upon a case study approach and upon the development of close collaborative links with two leading design automation tool vendor companies. Case study interviews were undertaken in 18 U.K. and European electronics companies and in 11 U.S., Japanese and Korean electronics firms. The work proceeded in two distinct phases. Firstly, the author participated with other researchers to jointly develop a functional specification of an electronics designers' toolset to support the process of product design in an integrated manufacturing environment. This first phase provided the context for Phase 2, the development of the AGILITY methodology for specifying product design capabilities, which represents the author's individual contribution. The contribution to knowledge made by the research lies in the creation of a process methodology which, for the first time, will help U.K. electronics companies to define for themselves product design capabilities which are robust and which support their wider business objectives. No such methodology is currently available in a form which is both accessible and affordable to smaller firms. Furthermore, the author has uncovered no evidence of the existence of such a methodology even for use by large electronics firms. Validation of the methodology is subject to an ongoing process of feedback. Racal Redac Ltd.

    TinyML: Tools, Applications, Challenges, and Future Research Directions

    In recent years, Artificial Intelligence (AI) and Machine Learning (ML) have gained significant interest from both industry and academia. Notably, conventional ML techniques require enormous amounts of power to meet the desired accuracy, which has limited their use mainly to high-capability devices such as network nodes. However, with many advancements in technologies such as the Internet of Things (IoT) and edge computing, it is desirable to incorporate ML techniques into resource-constrained embedded devices for distributed and ubiquitous intelligence. This has motivated the emergence of the TinyML paradigm, an embedded ML technique that enables ML applications on cheap, resource- and power-constrained devices. However, during this transition towards appropriate implementation of TinyML technology, multiple challenges, such as optimizing processing capacity, improving reliability, and maintaining learning models' accuracy, require timely solutions. In this article, various avenues available for TinyML implementation are reviewed. Firstly, a background of TinyML is provided, followed by detailed discussions on various tools supporting TinyML. Then, state-of-the-art applications of TinyML using advanced technologies are detailed. Lastly, various research challenges and future directions are identified. (Comment: 12 pages, 3 tables, 4 figures.)
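
    A concrete instance of the workflow such tools support is training a model in standard TensorFlow and converting it, with post-training quantization, into a form small enough for a microcontroller. The sketch below uses the TensorFlow Lite converter; the toy model and file name are illustrative, not taken from the article.

```python
import tensorflow as tf

# A deliberately tiny Keras model standing in for a real TinyML workload.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Convert to TensorFlow Lite with default post-training quantization,
# shrinking the weights so the model fits resource-constrained devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# This flat buffer is what would be embedded in MCU firmware
# (e.g., as a C array for TensorFlow Lite for Microcontrollers).
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantized model size: {len(tflite_model)} bytes")
```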

    A custom computing framework for orientation and photogrammetry

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 211-223). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. There is great demand today for real-time computer vision systems, with applications including image enhancement, target detection and surveillance, autonomous navigation, and scene reconstruction. These operations generally require extensive computing power; when multiple conventional processors and custom gate arrays are inappropriate, due to either excessive cost or risk, a class of devices known as Field-Programmable Gate Arrays (FPGAs) can be employed. FPGAs offer the flexibility of a programmable solution and nearly the performance of a custom gate array. When implementing a custom algorithm in an FPGA, one must be more efficient than with gate array technology. By tailoring the algorithms, architectures, and precisions, the gate count of an algorithm may be sufficiently reduced to fit into an FPGA. The challenge is to perform this customization of the algorithm while still maintaining the required performance. The techniques required to perform algorithmic optimization for FPGAs are scattered across many fields; what is currently lacking is a framework for utilizing all of these well-known and developing techniques. The purpose of this thesis is to develop such a framework for orientation and photogrammetry systems. by Paul D. Fiore, Ph.D.
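
    The precision-tailoring idea can be illustrated by quantizing an algorithm's coefficients to a fixed-point grid and measuring the accuracy that remains as the word length shrinks. The sketch below is an assumption-laden illustration, not the thesis's method: the FIR example, Q-format helper, and error metric are all stand-ins.

```python
import numpy as np

def to_fixed_point(x, frac_bits):
    """Round values to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

# Toy "algorithm": an FIR filter whose coefficient precision we trade
# against output error, mimicking gate-count reduction on an FPGA target.
rng = np.random.default_rng(1)
coeffs = rng.normal(0, 0.3, 16)
signal = rng.normal(0, 1.0, 1000)
reference = np.convolve(signal, coeffs)

for frac_bits in (12, 8, 6, 4):
    approx = np.convolve(signal, to_fixed_point(coeffs, frac_bits))
    err = np.max(np.abs(approx - reference))
    print(f"{frac_bits} fractional bits -> max error {err:.5f}")
```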

    Techniques of EMG signal analysis: detection, processing, classification and applications

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper gives researchers a good understanding of EMG signals and their analysis procedures, knowledge that will help them develop more powerful, flexible, and efficient applications.
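
    A typical first stage of such analysis is band-pass filtering the raw signal and extracting an amplitude envelope as a feature for classification. The SciPy sketch below is a minimal illustration; the sampling rate, band edges, and window length are common defaults assumed here, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0, band=(20.0, 450.0), win_ms=100):
    """Band-pass filter raw EMG, rectify it, and compute a moving-RMS envelope."""
    b, a = butter(4, band, btype="bandpass", fs=fs)   # 4th-order Butterworth
    filtered = filtfilt(b, a, raw)                    # zero-phase filtering
    rectified = np.abs(filtered)
    win = int(fs * win_ms / 1000)
    # Moving RMS: a common feature for prosthetic control and grasp recognition.
    squared = np.convolve(rectified**2, np.ones(win) / win, mode="same")
    return np.sqrt(squared)

# Illustrative use on synthetic noise standing in for a recorded EMG burst.
rng = np.random.default_rng(2)
raw = rng.normal(0, 1, 4000) * np.concatenate([np.ones(2000), 3 * np.ones(2000)])
envelope = emg_envelope(raw)
print(f"mean envelope, rest vs. burst: {envelope[:2000].mean():.2f} "
      f"vs. {envelope[2000:].mean():.2f}")
```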

    Working With Incremental Spatial Data During Parallel (GPU) Computation

    Central to many complex systems, spatial actors require an awareness of their local environment to enable behaviours such as communication and navigation. Complex system simulations represent this behaviour with Fixed Radius Near Neighbours (FRNN) search. This algorithm allows actors to store data at spatial locations and then query the data structure to find all data stored within a fixed radius of the search origin. The work within this thesis answers the question: what techniques can be used to improve the performance of FRNN searches during complex system simulations on Graphics Processing Units (GPUs)? It is generally agreed that Uniform Spatial Partitioning (USP) is the most suitable data structure for providing FRNN search on GPUs. However, due to the architectural complexities of GPUs, performance is constrained such that FRNN search remains one of the most expensive stages common to complex system models. Existing innovations to USP highlight the need to take advantage of recent GPU advances; reducing divergence and limiting redundant memory accesses are viable routes to improving the performance of FRNN search. This thesis addresses these with three separate optimisations that can be used simultaneously. Experiments have assessed the impact of the optimisations on the general case of FRNN search found within complex system simulations and demonstrated their impact in practice when applied to full complex system models. The results presented show that the performance of the construction and query stages of FRNN search can be improved by over 2x and 1.3x, respectively. These improvements allow complex system simulations to be executed faster, enabling increases in scale and model complexity.
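
    The USP structure the thesis builds on can be summarized briefly: bin each point into a uniform grid whose cell width equals the search radius, then answer a query by scanning only the 3x3 block of cells (in 2D) around the origin. The CPU sketch below illustrates the data structure only; the GPU-specific optimisations the thesis develops are not reflected here.

```python
import math
from collections import defaultdict

def build_grid(points, radius):
    """Bin points into a uniform grid with cell width equal to the search radius."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // radius), int(y // radius))].append(i)
    return grid

def frnn_query(points, grid, origin, radius):
    """Return indices of all points within `radius` of `origin`.

    Only the 3x3 block of cells around the origin can contain hits, which
    bounds the work per query and maps well onto parallel hardware.
    """
    ox, oy = origin
    cx, cy = int(ox // radius), int(oy // radius)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for i in grid.get((cx + dx, cy + dy), ()):
                px, py = points[i]
                if math.hypot(px - ox, py - oy) <= radius:
                    hits.append(i)
    return hits

# Illustrative use: the distant point at (2.0, 2.0) is never inspected.
points = [(0.1, 0.1), (0.3, 0.2), (2.0, 2.0), (0.45, 0.4)]
grid = build_grid(points, radius=0.5)
print(frnn_query(points, grid, origin=(0.25, 0.25), radius=0.5))
```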