8 research outputs found

    High level compilation for gate reconfigurable architectures

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 205-215). A continuing exponential increase in the number of programmable elements is turning the management of gate-reconfigurable architectures as "glue logic" into an intractable problem; it is past time to raise the abstraction level. The physical hardware in gate-reconfigurable architectures is all low level - individual wires, bit-level functions, and single-bit registers - hence one should look to the fetch-decode-execute machinery of traditional computers for higher-level abstractions. Ordinary computers have machine-level architectural mechanisms that interpret instructions - instructions that are generated by a high-level compiler. Efficiently moving up to the next abstraction level requires leveraging these mechanisms without introducing the overhead of machine-level interpretation. In this dissertation, I solve this fundamental problem by specializing architectural mechanisms with respect to input programs. This solution is the key to efficient compilation of high-level programs to gate-reconfigurable architectures. My approach to specialization includes several novel techniques. I develop, with others, extensive bitwidth analyses that apply to registers, pointers, and arrays. I use pointer analysis and memory disambiguation to target devices with blocks of embedded memory. My approach to memory parallelization generates a spatial hierarchy that enables easier-to-synthesize logic state machines with smaller circuits and no long wires. My space-time scheduling approach integrates the techniques of high-level synthesis with the static routing concepts developed for single-chip multiprocessors. Using DeepC, a prototype compiler demonstrating my thesis, I compile a new benchmark suite to Xilinx Virtex FPGAs. The resulting performance is comparable to a custom MIPS processor, with smaller area (40 percent on average), higher evaluation speed (2.4x), and lower energy (18x) and energy-delay (45x). Specialization of advanced mechanisms yields additional speedup, scaling with hardware area, at the expense of power. For comparison, I also target IBM's standard-cell SA-27E process and the RAW microprocessor. Results include a sensitivity analysis of the different mechanisms specialized and a grand comparison between the alternate targets. By Jonathan William Babb. Ph.D.
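
    As a rough illustration of the bitwidth-analysis idea mentioned in this abstract (a sketch under assumed value ranges, not DeepC's actual analysis; every function name and example range below is hypothetical), the following Python fragment infers the minimal number of bits a variable needs from its known range and propagates ranges through an addition via interval arithmetic.

        def bits_for_range(lo, hi):
            """Minimal width that can represent every integer in [lo, hi]."""
            if lo >= 0:
                return max(hi.bit_length(), 1)                     # unsigned width
            # two's-complement width: n bits cover [-2**(n-1), 2**(n-1) - 1]
            return max((-lo - 1).bit_length(), hi.bit_length()) + 1

        def add_range(a, b):
            """Range of x + y given the ranges of x and y (interval arithmetic)."""
            (alo, ahi), (blo, bhi) = a, b
            return (alo + blo, ahi + bhi)

        # Example: a loop counter known to stay in [0, 9] and an accumulator in
        # [0, 9000] need only 4 and 14 bits respectively, instead of a full 32.
        counter = (0, 9)
        accum = (0, 9000)
        print(bits_for_range(*counter))                    # 4
        print(bits_for_range(*accum))                      # 14
        print(bits_for_range(*add_range(counter, accum)))  # 14 (range is [0, 9009])

    In a compiler of this kind, such ranges would come from static analysis of the source program, and narrower widths translate directly into smaller synthesized datapaths.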

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insight into developments in VLSI and digital systems that can be used to increase data-system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    Performance Optimizations for Software Transactional Memory

    The transition from single-core to multi-core processors demands a change from sequential to concurrent programming for mainstream programmers. However, concurrent programming has long been recognized as notoriously difficult. A major reason for its difficulty is that existing concurrent programming constructs provide low-level programming abstractions, and using these constructs forces programmers to consider many low-level details. Locks, the dominant construct for mutual exclusion, suffer from several well-known problems, such as deadlock, priority inversion, and convoying, and are directly related to the difficulty of concurrent programming. The alternative to locks, non-blocking programming, is not only extremely error-prone but also fails to deliver consistently good performance. Better programming constructs are critical to reduce the complexity of concurrent programming, increase productivity, and expose the computing power of multi-core processors. Transactional memory has recently emerged as a promising programming construct for supporting atomic operations on shared data. By eliminating the need to consider the huge number of possible interactions among concurrent transactions, transactional memory greatly reduces the complexity of concurrent programming and vastly improves programming productivity. Software transactional memory (STM) systems implement the transactional memory abstraction in software. Unfortunately, existing STM designs incur significant performance overhead that could prevent them from being widely used. Reducing STM overhead is therefore critical if mainstream programmers are to improve productivity without suffering performance degradation. My thesis is that the performance of STM can be significantly improved by intelligently designing validation and commit protocols, by designing the time base, and by incorporating application-specific knowledge. I present four novel techniques for improving the performance of STM systems to support my thesis. First, I propose a time-based STM system built on a runtime tuning strategy that delivers performance equal to or better than existing strategies. Second, I present several novel commit-phase designs and evaluate their performance. Third, I propose a new STM programming interface extension that enables transaction optimizations using fast shared-memory reads while maintaining transaction composability. Fourth, I present a distributed time-base design that outperforms existing time-base designs for certain types of STM applications. Finally, I propose a novel programming construct to support multi-place isolation. Experimental results show that these techniques can significantly improve STM performance, and we expect them to help STM gain acceptance among more programmers.
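
    To make the validation-and-commit discussion above concrete, here is a minimal, hypothetical Python sketch of a time-based STM in the spirit of the abstract: a global version clock, per-variable version stamps, read-set validation at commit, and retry on conflict. It is an illustration of the general technique, not the dissertation's system; all names (TVar, Transaction, atomically) are invented for this example.

        import threading

        _clock = 0
        _clock_lock = threading.Lock()

        def _next_timestamp():
            """Advance the global version clock and return the new timestamp."""
            global _clock
            with _clock_lock:
                _clock += 1
                return _clock

        class TVar:
            """A transactional variable: a value plus the version of its last write."""
            def __init__(self, value):
                self.value = value
                self.version = 0
                self.lock = threading.Lock()

        class Conflict(Exception):
            pass

        class Transaction:
            def __init__(self):
                self.read_set = {}    # TVar -> version observed at first read
                self.write_set = {}   # TVar -> tentative new value

            def read(self, tvar):
                if tvar in self.write_set:                 # read-your-own-writes
                    return self.write_set[tvar]
                self.read_set.setdefault(tvar, tvar.version)
                return tvar.value

            def write(self, tvar, value):
                self.write_set[tvar] = value

            def commit(self):
                # Lock every variable touched (in a global order to avoid deadlock),
                # validate that nothing we read has changed, then publish the writes
                # under a fresh timestamp from the global clock.
                touched = sorted(set(self.read_set) | set(self.write_set), key=id)
                for tvar in touched:
                    tvar.lock.acquire()
                try:
                    for tvar, seen in self.read_set.items():
                        if tvar.version != seen:
                            raise Conflict()               # someone committed in between
                    ts = _next_timestamp()
                    for tvar, value in self.write_set.items():
                        tvar.value = value
                        tvar.version = ts
                finally:
                    for tvar in touched:
                        tvar.lock.release()

        def atomically(work):
            """Run `work(tx)` as a transaction, retrying until it commits cleanly."""
            while True:
                tx = Transaction()
                try:
                    result = work(tx)
                    tx.commit()
                    return result
                except Conflict:
                    continue

        # Usage: transfer between two accounts atomically.
        a, b = TVar(100), TVar(0)
        atomically(lambda tx: (tx.write(a, tx.read(a) - 30),
                               tx.write(b, tx.read(b) + 30)))

    Real STM systems add the refinements the abstract alludes to (tuned validation schedules, scalable or distributed time bases, commit-phase variants); the point of this sketch is only the basic shape of optimistic read, validate, and commit.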

    Reports to the President

    A compilation of annual reports for the 1999-2000 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.

    Reports to the President

    A compilation of annual reports for the 1989-1990 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.

    Developing Learning System in Pesantren: The Role of ICT

    According to Krashen's affective filter hypothesis, students who are highly motivated, have a strong sense of self, and enter a learning context with a low level of anxiety are much more likely to become successful language acquirers than those who do not. Affective factors such as motivation, attitude, and anxiety have a direct impact on foreign language acquisition, and Horwitz et al. (1986) noted that many language learners feel anxious when learning foreign languages. This study therefore recruits 100 college students to fill out the Foreign Language Classroom Anxiety Scale (FLCAS) to investigate language-learning anxiety, and then designs and develops an affective tutoring system (ATS) for an empirical study. The study aims to improve students' learning interest by recognizing their emotional states during the learning process and providing adequate feedback. It is expected to enhance learners' motivation and interest through affective instructional design and thereby improve their learning performance.

    From cyber-utopia to cyber-war: normative change in cyberspace

    This dissertation analyzes a normative change in how states perceive and act politically towards the Internet. This change is currently reflected in measures aimed at exercising control and state sovereignty in and over cyberspace. These include phenomena such as the total surveillance of data streams and the extensive collection of connection data by intelligence services, the control (political censorship) and manipulation of information (information war), as well as the arms spiral around offensive cyber capabilities to disrupt and destroy information infrastructures. States face a loss of control that they want to compensate for. As various studies show, the phenomenon of a perceived loss of control and the establishment of a norm of control (filtering and monitoring technology) is equally evident in democratic and non-democratic states. This militarized perception of the Internet is remarkable insofar as Western politicians perceived the same Internet technology in the 1980s and 1990s in a completely different way: back then, the lack of state control was seen as desirable. Instead of controlling and monitoring all aspects of the Internet, a "hands-off", laissez-faire idea dominated political behavior at the time: the possibilities of democratization through information technologies, the liberalization of authoritarian societies through technology, and the free availability of global knowledge. The idea of national control over communications technology was considered innovation-inhibiting, undemocratic, and even technically impossible. The topic of this work is the interaction between state power and sovereignty (e.g. political control through information sovereignty) and digital technologies. The research question is: which process led to the establishment of norms of control and rule (surveillance, censorship, cyber-war) with regard to the Internet as a medium? Furthermore: what are the implications of this normative change for the fundamental functioning of the Internet? The aim is to examine the thesis of the militarization of cyberspace empirically and in detail, on the basis of a longitudinal case study of Internet development in the USA since the 1960s. An interdisciplinary, multi-theoretical approach is chosen, drawing on constructivist norms research and the Social Construction of Technology approach.