
    WeblabSim

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, June 2005. Includes bibliographical references (p. 149-157). In the field of microelectronics, a device simulator is an important engineering tool with tremendous educational value. With a device simulator, a student can examine the characteristics of a microelectronic device described by a particular model, making it easier to develop an intuition for the general behavior of that device and to examine the impact of particular device parameters on its characteristics. In this thesis, we designed and implemented the MIT Device Simulation WebLab ("WeblabSim"), an online simulator for exploring the behavior of microelectronic devices. WeblabSim makes a device simulator readily available to users on the web, anywhere and at any time. Through a Java applet interface, a user connected to the Internet specifies and submits a simulation to the system. A program performs the simulation on a computer that can be located anywhere else on the Internet, and the results are sent back to the user's applet for graphing and further analysis. The WeblabSim system uses a three-tier design based on the iLab Batched Experiment Architecture: a client applet that lets users configure simulations, a laboratory server that runs them, and a generic service broker that mediates between the two through SOAP-based web services. We have implemented a graphical client applet based on the client used by the MIT Microelectronics WebLab. Our laboratory server has a distributed, modular design consisting of a data store, several worker servers that run simulations, and a master server that acts as a coordinator. On this system, we have successfully deployed WinSpice, a circuit simulator based on Berkeley Spice3F4. Our initial experiences with WeblabSim indicate that it is feature-complete, reliable, and efficient. We are satisfied that it is ready for beta deployment in a classroom setting, which we hope to do in Fall 2004. By Adrian Solis. M.Eng.
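    The master/worker dispatch described in this abstract can be sketched as follows. This is an illustrative Python sketch under assumed names (run_spice, the job tuples, the in-memory queue), not the thesis code: the real system exchanges SOAP messages between the Java applet, the service broker, and the laboratory server, and runs WinSpice on the worker machines.

```python
import queue
import threading

jobs = queue.Queue()   # the master's queue of submitted simulations
results = {}           # job id -> simulation output

def run_spice(netlist):
    # Stand-in for invoking the circuit simulator on a worker host.
    return f"simulated({netlist})"

def worker():
    while True:
        job_id, netlist = jobs.get()
        results[job_id] = run_spice(netlist)
        jobs.task_done()

# The master starts a small worker pool, queues client jobs, and waits
# for all of them to finish before results go back for graphing.
for _ in range(3):
    threading.Thread(target=worker, daemon=True).start()
for i, netlist in enumerate(["inverter.cir", "nmos_iv.cir"]):
    jobs.put((i, netlist))
jobs.join()
print(results)  # {0: 'simulated(inverter.cir)', 1: 'simulated(nmos_iv.cir)'}
```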

    A Taxonomy for and Analysis of Anonymous Communications Networks

    Any entity operating in cyberspace is susceptible to debilitating attacks. With cyber attacks intended to gather intelligence and disrupt communications rapidly replacing the threat of conventional and nuclear attacks, a new age of warfare is at hand. In 2003, the United States acknowledged that the speed and anonymity of cyber attacks makes distinguishing among the actions of terrorists, criminals, and nation states difficult. Even President Obama's Cybersecurity Chief-elect recognizes the challenge of increasingly sophisticated cyber attacks. Now through April 2009, the White House is reviewing federal cyber initiatives to protect US citizen privacy rights. Indeed, the rising quantity and ubiquity of new surveillance technologies in cyberspace enables instant, undetectable, and unsolicited information collection about entities. Hence, anonymity and privacy are becoming increasingly important issues. Anonymization enables entities to protect their data and systems from a diverse set of cyber attacks and preserves privacy. This research provides a systematic analysis of anonymity degradation, preservation, and elimination in cyberspace to enhance the security of information assets. This includes discovery/obfuscation of identities and actions of/from potential adversaries. First, novel taxonomies are developed for classifying and comparing well-established anonymous networking protocols. These expand the classical definition of anonymity and capture the peer-to-peer and mobile ad hoc anonymous protocol family relationships. Second, a unique synthesis of state-of-the-art anonymity metrics is provided. This significantly aids an entity's ability to reliably measure changing anonymity levels, thereby increasing its ability to defend against cyber attacks. Finally, a novel epistemic-based mathematical model is created to characterize how an adversary reasons with knowledge to degrade anonymity. This offers multiple anonymity property representations and well-defined logical proofs to ensure the accuracy and correctness of current and future anonymous network protocol design.
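    As a concrete instance of the kind of metric such a synthesis covers, the sketch below computes the entropy-based degree of anonymity from the metrics literature (Serjantov and Danezis; Diaz et al.): the attacker's uncertainty over candidate senders, normalized by its maximum possible value. It is illustrative only, not a metric defined by this work.

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of the attacker's distribution over
    candidate senders."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def anonymity_degree(probs):
    """Normalized degree d = H(X) / log2(N): 1.0 means the attacker has
    learned nothing, 0.0 means one sender is identified with certainty."""
    n = len(probs)
    if n <= 1:
        return 0.0
    return entropy_bits(probs) / math.log2(n)

# Four equally likely senders, then the same set after traffic analysis
# has made one sender much more likely: anonymity is degraded.
print(anonymity_degree([0.25, 0.25, 0.25, 0.25]))  # 1.0
print(anonymity_degree([0.7, 0.1, 0.1, 0.1]))      # ~0.68
```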

    Automated synthesis of delay-insensitive circuits


    Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing

    Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the École Normale Supérieure de Lyon, France, from 21 to 23 July 2014. This two-and-a-half-day event marked the sixth in a series that started ten years ago in San Francisco, USA. The workshop's focus was on combinatorial mathematics and algorithms in high performance computing, broadly interpreted. It featured three invited talks, 27 contributed talks, and eight poster presentations. The three invited talks focused on two fields of research: randomized algorithms for numerical linear algebra, and network analysis. The contributed talks and the posters targeted modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program "Investissements d'Avenir" ANR-11-IDEX-0007 operated by the French National Research Agency) and by SIAM.

    Safety Assurance in Interlocking Design

    This thesis takes a pedagogical stance in demonstrating how results from theoretical computer science may be applied to yield significant insight into the behaviour of the devices that computer systems engineering practice seeks to put in place, and in showing that this is immediately attainable with the present state of the art. The focus for this detailed study is provided by the type of solid state signalling systems currently being deployed throughout mainline British railways. Safety and system reliability concerns dominate in this domain. With such motivation, two issues are tackled: the special problem of software quality assurance in these data-driven control systems, and the broader problem of design dependability. In the former case, the analysis is directed towards proving safety properties of the geographic data which encode the control logic for the railway interlocking; the latter examines the fidelity of the communication protocols upon which the distributed control system depends. The starting point for both avenues of attack is a mathematical model of the interlocking logic that is derived by interpreting the geographic data in process algebra. Thus, the emphasis is on the semantics of the programming language in question, and the kinds of safety properties which can be expressed as invariants of the system's ongoing behaviour. Although the model so derived turns out to be too concrete to be effectual in program verification in general, a careful analysis of the safety proof reveals a simple co-induction argument that leads to a highly efficient proof methodology. From this understanding it is straightforward to mechanise the safety arguments, and a prototype verification system is realised in higher-order logic, using the proof tactics of the theorem prover to achieve full automation. The other line of inquiry considers whether the integrity of the overall design that coordinates the activities of many concurrent control elements can be compromised. Therefore, the formal model is developed to specifically answer safety-related concerns about the protocol employed to achieve distributed control in the management of larger railway networks. The exercise reveals that moderately serious design flaws do exist, but the real value of the mathematical model is twofold: it makes explicit one's assumptions about the conditions under which the faults can and cannot be activated, and it provides a framework in which to prove that a simple modification to the design recovers complete security at negligible cost to performance.
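    The safety properties at stake are invariants of the system's reachable behaviour. The following is a minimal sketch of invariant checking by exhaustive state exploration on a toy two-signal interlocking; it stands in for, and is far simpler than, the process-algebra model and mechanised higher-order-logic proofs developed in the thesis.

```python
from collections import deque

def holds_invariantly(initial, successors, invariant):
    """Return a reachable state violating the invariant, or None if safe."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Toy interlocking: (a, b) are two signals guarding a shared segment.
# A signal may clear only while the other is at danger, and may always
# return to danger; the invariant forbids both showing 'proceed'.
def successors(state):
    a, b = state
    moves = [(False, b), (a, False)]   # either signal back to danger
    if not b:
        moves.append((True, b))        # clear A only if B is at danger
    if not a:
        moves.append((a, True))        # clear B only if A is at danger
    return moves

print(holds_invariantly((False, False), successors,
                        lambda s: not (s[0] and s[1])))  # None -> safe
```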

    Algorithm to layout (ATL) systems for VLSI design

    PhD thesis. The complexities involved in custom VLSI design, together with the failure of CAD techniques to keep pace with advances in fabrication technology, have resulted in a design bottleneck. Powerful tools are required to exploit the processing potential offered by the densities now available. Describing a system in a high-level algorithmic notation makes writing, understanding, modifying, and verifying a design description easier. It also removes some of the emphasis on the physical issues of VLSI design and focuses attention on formulating a correct and well-structured design. This thesis examines how current trends in CAD techniques might influence the evolution of advanced Algorithm To Layout (ATL) systems. The envisaged features of an example system are specified. Particular attention is given to the implementation of one of its features, COPTS (Compilation Of Occam Programs To Schematics). COPTS is capable of generating schematic diagrams from which an actual layout can be derived. It takes a description written in a subset of Occam and generates a high-level schematic diagram depicting its realisation as a VLSI system. This diagram provides the designer with feedback on the relative placement and interconnection of the operators used in the source code. It also gives a visual representation of the parallelism defined in the Occam description. Such diagrams are a valuable aid in documenting the implementation of a design. Occam has also been selected as the input to the design system of which COPTS is a feature. The choice of Occam was made on the assumption that the most appropriate algorithmic notation for such a design system is a suitable high-level programming language. This is in contrast to current automated VLSI design systems, which typically use a hardware description language for input. These special-purpose languages currently concentrate on handling structural/behavioural information and have limited ability to express algorithms. Using a language such as Occam allows a designer to write a behavioural description which can be compiled and executed as a simulator, or prototype, of the system. The programmability introduced into the design process enables designers to concentrate on a design's underlying algorithm. The choice of this algorithm is the most crucial decision, since it determines the performance and area of the silicon implementation. The thesis is divided into four sections, each of several chapters. The first section considers VLSI design complexity, compares the expert systems and silicon compilation approaches to tackling it, and examines its parallels with software complexity. The second section reviews the advantages of using a conventional programming language for VLSI system descriptions; a number of alternative high-level programming languages are considered for application in VLSI design. The third section defines the overall ATL system COPTS is envisaged to be part of, and considers the schematic representation of Occam programs. The final section presents a summary of the overall project and suggestions for future work on realising the full ATL system.
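    To make the SEQ/PAR structure concrete, here is a small illustrative sketch; the tree encoding and the example program are hypothetical, not the COPTS implementation, and show only the parallel skeleton a schematic would depict.

```python
# An Occam-like program encoded as a (combinator, children) tree whose
# SEQ/PAR skeleton is what a COPTS-style schematic lays out on the page.

def show(node, depth=0):
    """Pretty-print the SEQ/PAR structure as an indented tree."""
    if isinstance(node, str):            # a primitive process
        print("  " * depth + node)
        return
    kind, children = node
    print("  " * depth + kind)
    for child in children:
        show(child, depth + 1)

# Two sequential branches composed in parallel: the parallelism that the
# generated schematic makes visually explicit.
program = ("PAR", [
    ("SEQ", ["read(a)", "x := a + 1"]),
    ("SEQ", ["read(b)", "y := b * 2"]),
])
show(program)
```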

    PiCo: A Domain-Specific Language for Data Analytics Pipelines

    In the world of Big Data analytics, there is a series of tools aiming at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data, and execution models (for which only informal, and often confusing, semantics is generally provided), all share a common underlying model, namely the Dataflow model. Using this model as a starting point, it is possible to categorize and analyze almost all aspects of Big Data analytics tools from a high-level perspective. This analysis can be considered a first step toward a formal model to be exploited in the design of a (new) framework for Big Data analytics. By putting clear separations between all levels of abstraction (i.e., from the runtime to the user API), it is easier for a programmer or software designer to avoid mixing low-level with high-level aspects, as we often see in state-of-the-art Big Data analytics frameworks. From the user-level perspective, we think that a clearer and simpler semantics is preferable, together with a strong separation of concerns. For this reason, we use the Dataflow model as a starting point to build a programming environment with a simplified programming model, implemented as a Domain-Specific Language on top of a stack of layers that together form a prototypical framework for Big Data analytics. The contribution of this thesis is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm, Google Dataflow), thus making it easier to understand high-level data-processing applications written in such frameworks. As a result of this analysis, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit in each level. Second, we propose a programming environment based on this layered model in the form of a Domain-Specific Language (DSL) for processing data collections, called PiCo (Pipeline Composition). The main entity of this programming model is the Pipeline, basically a DAG-composition of processing elements. The model is intended to give the user a unique interface for both stream and batch processing, completely hiding data management and focusing only on operations, which are represented by Pipeline stages. Our DSL will be built on top of the FastFlow library, exploiting both shared and distributed parallelism, and implemented in C++11/14 with the aim of porting C++ into the Big Data world.
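    The Pipeline-as-DAG idea can be illustrated compactly. PiCo itself is a C++11/14 DSL on FastFlow; the Python below is only a sketch, with a hypothetical Pipeline API, of composing stages behind one interface for both batch and stream inputs.

```python
class Pipeline:
    def __init__(self, *stages):
        self.stages = list(stages)   # a linear chain, the simplest DAG

    def __or__(self, stage):
        # p | stage composes a new Pipeline with one more stage.
        return Pipeline(*self.stages, stage)

    def run(self, collection):
        # The same Pipeline works over a finite batch or an unbounded
        # stream: execution is a lazy pull over any iterable.
        for item in collection:
            for stage in self.stages:
                item = stage(item)
            yield item

# Count words per record; only the operations are visible to the user,
# data management stays inside run().
word_count = Pipeline(str.lower) | str.split | len
print(list(word_count.run(["Hello World", "one two three"])))  # [2, 3]
```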

    Timing Closure in Chip Design

    Achieving timing closure is a major challenge in the physical design of a computer chip: the task is to find a physical realization fulfilling the speed specifications. In this thesis, we propose new algorithms for the key tasks of performance optimization, namely repeater tree construction, circuit sizing, clock skew scheduling, threshold voltage optimization, and plane assignment. Furthermore, a new program flow for timing closure is developed that integrates these algorithms with placement and clock tree construction. For repeater tree construction, a new algorithm for computing topologies, which are later filled with repeaters, is presented. To this end, we propose a new delay model for topologies that accounts not only for path lengths, as existing approaches do, but also for the number of bifurcations on a path, which introduce extra capacitance and thereby delay. In the extreme cases of pure power optimization and pure delay optimization, the optimum topologies under our delay model are minimum Steiner trees and alphabetic code trees with the shortest possible path lengths, respectively. We present a new, extremely fast algorithm that scales seamlessly between the two opposite objectives. For special cases, we prove the optimality of our algorithm. Its efficiency and effectiveness in practice are demonstrated by comprehensive experimental results. The task of circuit sizing is to assign millions of small elementary logic circuits to elements from a discrete set of logically equivalent, predefined physical layouts such that power consumption is minimized and all signal paths are sufficiently fast. In this thesis we develop a fast heuristic approach for global circuit sizing, followed by a local search into a local optimum. Our algorithms use, in contrast to existing approaches, the available discrete layout choices and accurate delay models with slew propagation. The global approach iteratively assigns slew targets to all source pins of the chip and chooses a discrete layout of minimum size preserving the slew targets. In comprehensive experiments on real instances, we demonstrate that the worst path delay is within 7% of its lower bound on average after a few iterations. The subsequent local search reduces this gap to 2% on average. Combining global and local sizing, we are able to size more than 5.7 million circuits within 3 hours. For the clock skew scheduling problem, we develop the first algorithm with a strongly polynomial running time for cycle time minimization in the presence of different cycle times and multi-cycle paths. In practice, an iterative local search method is much more efficient. We prove that this iterative method maximizes the worst slack, even when the feasible schedule is restricted to certain time intervals. Furthermore, we enhance the iterative local approach to determine a lexicographically optimal slack distribution. The clock skew scheduling problem is then generalized to allow for simultaneous data path optimization; in fact, this is a time-cost tradeoff problem. We develop the first combinatorial algorithm for computing time-cost tradeoff curves in graphs that may contain cycles. Starting from the lowest-cost solution, the algorithm iteratively computes a descent direction by a minimum cost flow computation; the maximum feasible step length is then determined by a minimum ratio cycle computation. This approach can be used in chip design for several optimization tasks, e.g. threshold voltage optimization or plane assignment.
    Finally, the optimization routines are combined into a timing closure flow. Here, global placement is alternated with global performance optimization. Net weights are used to penalize the length of critical nets during placement. After the global phase, the performance is improved further by applying more comprehensive optimization routines to the most critical paths. In the end, the clock schedule is optimized and clock trees are inserted. Computational results of the design flow are obtained on real-world computer chips.
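    A minimal sketch of the core of clock skew scheduling, under simplifying assumptions (one uniform cycle time, single-cycle paths, unlike the general setting treated in the thesis): with skews s, an edge u->v of delay d is feasible iff s(u) + d <= s(v) + T, a system of difference constraints solvable iff the graph with weights T - d has no negative cycle; the minimum feasible T, the maximum mean delay over register-to-register cycles, is then found by binary search. All names below are illustrative.

```python
def feasible(n, edges, T):
    """Bellman-Ford on weights T - d; True iff no negative cycle."""
    dist = [0.0] * n
    for _ in range(n):
        changed = False
        for u, v, d in edges:
            if dist[u] + (T - d) < dist[v] - 1e-12:
                dist[v] = dist[u] + (T - d)
                changed = True
        if not changed:
            return True
    return False  # still relaxing after n passes: negative cycle

def min_cycle_time(n, edges, hi=1e3, iters=60):
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if feasible(n, edges, mid) else (mid, hi)
    return hi

# Two registers in a loop with delays 3 and 5: the best achievable
# cycle time is the cycle's mean delay, (3 + 5) / 2 = 4.
print(round(min_cycle_time(2, [(0, 1, 3.0), (1, 0, 5.0)]), 3))  # 4.0
```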

    Intelligent cell memory system for real time engineering applications
