
    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports; 7551)

    ReCoSoC is intended to be an annual meeting for exposing and discussing gathered expertise as well as state-of-the-art research on SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    An architecture for embedded system communication

    Time is a major constraint in the development of most embedded systems. In many cases, the development of embedded software depends directly on the development of the embedded hardware. This calls for a development framework that enables embedded software and hardware to be developed in parallel. In an attempt to solve the problem, a concept prototype hardware-in-the-loop (HIL) simulation methodology has been proposed and implemented at the Ohio State University for the TMS320LF2407A DSP board. We build on top of that HIL system by rewriting the low-level device drivers so that data and control information can be set simultaneously, thus creating a software abstraction layer over the various devices available on the DSP board. The device drivers allow data access at the processor and pin level for the devices on the DSP board. This abstraction simulates external devices in a transparent manner using a device driver library that provides the same programming interface to the device simulators as to real devices. It also allows both real and simulated hardware connected to the DSP board to be tested as part of the embedded system. The main advantages of the framework are rapid prototyping, unit testing, and monitoring. We also modify the existing serial line protocol, compare the new protocol against the existing one, and show that the new protocol is efficient for large data transport. This protocol allows for the effective utilization of serial line bandwidth when the DSP board is used for signal processing or voice-based applications. We present the virtual testbed as a software development tool and conclude by exploring future directions for its applications.
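
    The device driver abstraction described above can be pictured as a table of function pointers that gives simulated and real devices an identical programming interface. The sketch below illustrates the idea in C; all names (dsp_device, adc_real_read, adc_sim_read, and the backends themselves) are hypothetical illustrations under assumed behavior, not the actual driver library from the thesis.

        #include <stdint.h>
        #include <stdio.h>

        /* One entry per device: the same interface is exposed whether the
           backend is real hardware or a software simulator. */
        typedef struct {
            const char *name;
            int (*read)(uint16_t *value);  /* read a sample from the device */
            int (*write)(uint16_t value);  /* write a value to the device   */
        } dsp_device;

        /* Hypothetical real-hardware backend: would access DSP registers. */
        static int adc_real_read(uint16_t *value) { *value = 0; return 0; }
        static int adc_real_write(uint16_t value) { (void)value; return 0; }

        /* Hypothetical simulator backend: serves scripted test data. */
        static int adc_sim_read(uint16_t *value) { *value = 512; return 0; }
        static int adc_sim_write(uint16_t value) { (void)value; return 0; }

        int main(void) {
            dsp_device adc_real = { "ADC(real)", adc_real_read, adc_real_write };
            dsp_device adc_sim  = { "ADC(sim)",  adc_sim_read,  adc_sim_write };
            (void)adc_real;

            /* Application code sees one interface; switching between real and
               simulated hardware is a one-line change here. */
            dsp_device adc = adc_sim;
            uint16_t sample;
            if (adc.read(&sample) == 0)
                printf("%s sample: %u\n", adc.name, (unsigned)sample);
            return 0;
        }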

    Artificial intelligence: a heuristic search for commercial and management science applications

    Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1984. Microfiche copy available in Archives and Dewey. Bibliography: leaves 185-188. By Philip A. Cooper. M.S.

    Second CLIPS Conference Proceedings, volume 1

    Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center on September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging expert systems.

    Near-Memory Address Translation

    Virtual memory (VM) is a crucial abstraction in modern computer systems at any scale, from handheld devices to datacenters. VM provides programmers the illusion of an always sufficiently large and linear memory, making programming easier. Although the core components of VM have remained largely unchanged since early VM designs, the design constraints and usage patterns of VM have radically shifted from when it was invented. Today, computer systems integrate hundreds of gigabytes to a few terabytes of memory, while tightly integrated heterogeneous computing platforms (e.g., CPUs, GPUs, FPGAs) are becoming increasingly ubiquitous. As there is a clear trend towards extending the CPU's VM to all computing elements in the system for an efficient and easy-to-use programming model, the continuous demand for faster memory accesses calls for fast translations across terabytes of memory for any computing element in the system. Unfortunately, conventional translation mechanisms fall short of providing fast translations as contemporary memories exceed the reach of today's translation caches, such as TLBs. In this thesis, we provide fundamental insights into why address translation sits on the critical path of accessing memory. We observe that the traditional fully associative flexibility to map any virtual page to any page frame precludes accessing memory before translating. We study the associativity in VM across a variety of scenarios by classifying page faults using the 3C model developed for caches. Our study demonstrates that the full associativity of VM is unnecessary, and only modest associativity is required. We conclude that capacity and compulsory misses, which are unaffected by associativity, dominate, while conflict misses rapidly disappear as the associativity of VM increases. Building on the modest associativity requirements, we propose a distributed memory management unit close to where the data resides to reduce or eliminate the TLB miss penalty.
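
    To make the associativity idea concrete, the sketch below shows how a set-associative virtual memory would restrict each virtual page to a small set of page frames, so the candidate data locations are known from the virtual address alone, before translation completes. The parameters and names are illustrative assumptions, not the thesis's actual design.

        #include <stdint.h>
        #include <stdio.h>

        #define NUM_FRAMES (1u << 20)  /* example: 1M physical page frames */
        #define WAYS       4u          /* modest associativity, e.g. 4-way */
        #define NUM_SETS   (NUM_FRAMES / WAYS)

        /* With set-associative VM, a virtual page may reside only in one of
           WAYS frames of its set, so memory can be probed in parallel with
           (or ahead of) translation. */
        static uint64_t frame_set(uint64_t vpn) {
            return vpn % NUM_SETS;
        }

        static uint64_t candidate_frame(uint64_t vpn, unsigned way) {
            return frame_set(vpn) * WAYS + way;
        }

        int main(void) {
            uint64_t vpn = 0xABCDE;
            for (unsigned way = 0; way < WAYS; way++)
                printf("vpn 0x%lx -> candidate frame 0x%lx\n",
                       (unsigned long)vpn,
                       (unsigned long)candidate_frame(vpn, way));
            return 0;
        }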

    An investigation into computer and network curricula

    This thesis consists of a series of internationally published, peer-reviewed journal and conference research papers that analyse the educational and training needs of undergraduate Information Technology (IT) students within the area of Computer and Network Technology (CNT) education. Research by Maj et al. found that accredited computing science curricula can fail to meet the expectations of employers in the field of CNT: "It was found that none of these students could perform first line maintenance on a Personal Computer (PC) to a professional standard with due regard to safety, both to themselves and the equipment. Neither could they install communication cards, cables and network operating system or manage a population of networked PCs to an acceptable commercial standard without further extensive training. It is noteworthy that none of the students interviewed had ever opened a PC. It is significant that all those interviewed for this study had successfully completed all the units on computer architecture and communication engineering" (Maj, Robbins, Shaw, & Duley, 1998). The students' curricula at that time lacked units in which they gained hands-on experience with modern PC hardware or networking skills. This was despite the fact that their computing science course was level one accredited, the highest accreditation level offered by the Australian Computer Society (ACS). The results of the initial survey in Western Australia led to the introduction of two new units within the Computing Science Degree at Edith Cowan University (ECU): Computer Installation & Maintenance (CIM) and Network Installation & Maintenance (NIM) (Maj, Fetherston, Charlesworth, & Robbins, 1998). Uniquely within an Australian university context, these new syllabi require students to work on real equipment. Such experience excludes digital circuit investigation, which is still a recommended approach by the Association for Computing Machinery (ACM) for computer architecture units (ACM, 2001, p. 97). Instead, the CIM unit employs a top-down approach based initially upon students' everyday experiences, which is more in accordance with constructivist educational theory and practice.

These papers propose an alternative model of IT education that helps to accommodate the educational and vocational needs of IT students in the context of continual rapid changes and developments in technology. The ACM have recognised the need for variation, noting that "there are many effective ways to organize a curriculum even for a particular set of goals and objectives" (Tucker et al., 1991, p. 70). A possible major contribution to new knowledge of these papers relates to how high-level abstract bandwidth (B-Node) models may contribute to the understanding of why and how computer and networking technology systems have developed over time. Because these models are de-coupled from the underlying technology, which is subject to rapid change, they may help to future-proof student knowledge and understanding of the ongoing and future development of computer and networking systems. The de-coupling is achieved through abstraction based upon bandwidth or throughput rather than the specific implementation of the underlying technologies. One of the underlying problems is that computing systems tend to change faster than the ability of most educational institutions to respond.

Abstraction and the use of B-Node models could help educational models respond more quickly to changes in the field, and can also help to introduce an element of future-proofing in the education of IT students. The importance of abstraction has been noted by the ACM, who state: "Levels of Abstraction: the nature and use of abstraction in computing; the use of abstraction in managing complexity, structuring systems, hiding details, and capturing recurring patterns; the ability to represent an entity or system by abstractions having different levels of detail and specificity" (ACM, 1991b). Bloom et al. note the importance of abstraction, listing under the heading "Knowledge of the universals and abstractions in a field" the objective: "Knowledge of the major schemes and patterns by which phenomena and ideas are organized. These are the large structures, theories, and generalizations which dominate a subject field or problem. These are the highest levels of abstraction and complexity" (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956, p. 203). Abstractions can be applied to computer and networking technology to provide students with common fundamental concepts regardless of the particular underlying technological implementation, and to help avoid the rapid redundancy of a detailed knowledge of modern computer and networking technology implementation and hands-on skills acquisition. Again the ACM note that "enduring computing concepts include ideas that transcend any specific vendor, package or skill set... While skills are fleeting, fundamental concepts are enduring and provide long lasting benefits to students, critically important in a rapidly changing discipline" (ACM, 2001, p. 70). These abstractions can also be reinforced by experiential learning tied to commercial practices. In this context, the other possibly major contribution to new knowledge provided by this thesis is an efficient, scalable and flexible model for assessing the hands-on skills and understanding of IT students. This is a form of Competency-Based Assessment (CBA), which has been successfully tested as part of this research and subsequently implemented at ECU. This is the first time within this field that this specific type of research has been undertaken within the university sector in Australia. Hands-on experience and understanding can become outdated, hence the need for the future-proofing provided via B-Node models. The three major research questions of this study are:

• Is it possible to develop a new, high-level abstraction model for use in CNT education?
• Is it possible to have CNT curricula that are more directly relevant to both student and employer expectations without suffering from rapid obsolescence?
• Can an effective, efficient and meaningful assessment be undertaken to test students' hands-on skills and understandings?

The ACM Special Interest Group on Data Communication (SIGCOMM) workshop report on Computer Networking, Curriculum Designs and Educational Challenges notes a list of teaching approaches: "... the more 'hands-on' laboratory approach versus the more traditional in-class lecture-based approach; the bottom-up approach towards subject matter versus the top-down approach" (Kurose, Leibeherr, Ostermann, & Ott-Boisseau, 2002, para. 1). Bandwidth considerations are approached from the PC hardware level and at each of the seven layers of the International Standards Organisation (ISO) Open Systems Interconnection (OSI) reference model.

It is believed that this research is of significance to computing education. However, further research is needed.
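
    The abstract does not spell out the B-Node formalism, but the idea it describes, abstracting each component purely by bandwidth or throughput rather than by its implementation, can be sketched as a chain of nodes whose end-to-end throughput is bounded by the slowest node. The node names and figures below are illustrative assumptions only, not the thesis's actual models.

        #include <stdio.h>

        /* A B-Node abstracts a component purely by its bandwidth (MB/s),
           independently of the technology that implements it. */
        typedef struct {
            const char *name;
            double bandwidth_mbs;
        } bnode;

        int main(void) {
            /* Hypothetical data path from disk to network interface. */
            bnode path[] = {
                { "disk",        120.0 },
                { "system bus", 4000.0 },
                { "CPU/memory", 8000.0 },
                { "NIC",         125.0 },
            };
            int n = (int)(sizeof path / sizeof path[0]);

            /* End-to-end throughput is bounded by the slowest node. */
            double limit = path[0].bandwidth_mbs;
            const char *bottleneck = path[0].name;
            for (int i = 1; i < n; i++) {
                if (path[i].bandwidth_mbs < limit) {
                    limit = path[i].bandwidth_mbs;
                    bottleneck = path[i].name;
                }
            }
            printf("bottleneck: %s (%.1f MB/s)\n", bottleneck, limit);
            return 0;
        }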

    Applying the Engineering Statechart Formalism to the evaluation of soft real-time in operating systems: a use-case-tailored modeling and analysis technique

    Multimedia applications that have emerged in recent years impose unique requirements on an underlying general purpose operating system (GPOS). The suitability of a GPOS for multimedia processing is judged by its soft real-time capabilities. To date, the question of how these capabilities can be assessed has scarcely been addressed: this is a gap in GPOS research. By answering questions on the impacts of the Interrupt Handling Facility (IHF) on the overall soft real-time capabilities of a GPOS, this thesis contributes to filling this gap. The Engineering Statechart Formalism (ESF), a use-case-tailored formal method for modeling real-world operating systems, is syntactically and semantically defined. Models of the IHF of selected real-world operating systems are then created by means of this technique. As no appropriate real-time concept fitting the goals of this thesis yet exists, a suitable definition is constructed. By projecting this system-wide idea onto the interrupt subsystem, specific indicators for this subsystem are derived. These indicators are then evaluated by applying formal techniques such as graph-based analysis and temporal logic model checking to the ESF models. Finally, the assertions derived from this evaluation are interpreted with respect to their impact on real-time multimedia processing in different general purpose operating systems.

Multimedia applications have become widespread in recent years. Such applications place special demands on the operating system (OS) on which they run. In particular, the real-time capabilities of the operating system matter for its suitability for multimedia processing. To date, the question of how these capabilities manifest themselves concretely within an OS has been insufficiently investigated. This thesis contributes to filling this gap in OS research. The effects of the interrupt-handling subsystem on the real-time capability of the overall system are analysed in detail on the basis of models of this subsystem in various operating systems. To permit a formal evaluation, a use-case-tailored formal method for OS modeling is employed. The specified syntax and semantics of this Engineering Statechart Formalism (ESF) are based on the classical statechart formalism. As no suitable notion of real time exists to date, a consistent definition is derived. By mapping this property of the overall system onto interrupt handling, specific indicators for this subsystem are derived. The values of these indicators for the various operating systems under investigation are evaluated using formal methods such as graph-based analysis and temporal logic model checking. The interpretation of the results yields statements about the effects of the interrupt-handling implementation on the real-time capabilities of the operating systems examined when processing multimedia data.
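
    As one concrete illustration of the graph-based analysis mentioned above, a statechart can be treated as a directed transition graph on which reachability questions become simple graph searches. The states and transitions below are invented for illustration and are not the thesis's actual ESF models.

        #include <stdio.h>

        #define NSTATES 4
        /* Invented states: 0=idle, 1=irq_raised, 2=handler, 3=deferred. */
        static const char *state_name[NSTATES] =
            { "idle", "irq_raised", "handler", "deferred" };
        static const int edge[NSTATES][NSTATES] = {
            /* idle       */ { 0, 1, 0, 0 },
            /* irq_raised */ { 0, 0, 1, 0 },
            /* handler    */ { 1, 0, 0, 1 },
            /* deferred   */ { 0, 0, 1, 0 },
        };

        /* Depth-first search: is state `to` reachable from state `from`? */
        static int reachable(int from, int to, int seen[NSTATES]) {
            if (from == to) return 1;
            seen[from] = 1;
            for (int s = 0; s < NSTATES; s++)
                if (edge[from][s] && !seen[s] && reachable(s, to, seen))
                    return 1;
            return 0;
        }

        int main(void) {
            int seen[NSTATES] = { 0 };
            printf("%s reachable from %s: %s\n",
                   state_name[2], state_name[0],
                   reachable(0, 2, seen) ? "yes" : "no");
            return 0;
        }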

    Enabling Hyperscale Web Services

    Modern web services such as social media, online messaging, web search, video streaming, and online banking often support billions of users, requiring data centers that scale to hundreds of thousands of servers, i.e., hyperscale. In fact, the world continues to expect hyperscale computing to drive more futuristic applications such as virtual reality, self-driving cars, conversational AI, and the Internet of Things. This dissertation presents technologies that will enable tomorrow’s web services to meet the world’s expectations. The key challenge in enabling hyperscale web services arises from two important trends. First, over the past few years, there has been a radical shift in hyperscale computing due to an unprecedented growth in data, users, and web service software functionality. Second, modern hardware can no longer support this growth in hyperscale trends due to a decline in hardware performance scaling. To enable this new hyperscale era, hardware architects must become more aware of hyperscale software needs, and software researchers can no longer expect unlimited hardware performance scaling. In short, systems researchers can no longer follow the traditional approach of building each layer of the systems stack separately. Instead, they must rethink the synergy between the software and hardware worlds from the ground up. This dissertation establishes such a synergy to enable futuristic hyperscale web services. It bridges the software and hardware worlds, demonstrating the importance of that bridge in realizing efficient hyperscale web services via solutions that span the systems stack. The specific goal is to design software that is aware of new hardware constraints and to architect hardware that efficiently supports new hyperscale software requirements. This dissertation spans two broad thrusts: (1) a software thrust and (2) a hardware thrust to analyze the complex hyperscale design space and use insights from these analyses to design efficient cross-stack solutions for hyperscale computation. In the software thrust, this dissertation contributes uSuite, the first open-source benchmark suite of web services built with a new hyperscale software paradigm, which is used in academia and industry to study hyperscale behaviors. Next, this dissertation uses uSuite to study software threading implications in light of today’s hardware reality, identifying new insights in the age-old research area of software threading. Driven by these insights, this dissertation demonstrates how threading models must be redesigned at hyperscale by presenting an automated approach and tool, uTune, that makes intelligent run-time threading decisions. In the hardware thrust, this dissertation architects both commodity and custom hardware to efficiently support hyperscale software requirements. First, this dissertation characterizes commodity hardware’s shortcomings, revealing insights that influenced commercial CPU designs. Based on these insights, this dissertation presents an approach and tool, SoftSKU, that enables cheap commodity hardware to efficiently support new hyperscale software paradigms, improving the efficiency of real-world web services that serve billions of users, saving millions of dollars, and meaningfully reducing the global carbon footprint. This dissertation also presents a hardware-software co-design, uNotify, that redesigns commodity hardware with minimal modifications by using existing hardware mechanisms more intelligently to overcome new hyperscale overheads.
Next, this dissertation characterizes how custom hardware must be designed at hyperscale, resulting in industry-academia benchmarking efforts, commercial hardware changes, and improved software development. Based on this characterization's insights, this dissertation presents Accelerometer, an analytical model that estimates gains from hardware customization. Multiple hyperscale enterprises and hardware vendors use Accelerometer to make well-informed hardware decisions.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169802/1/akshitha_1.pd
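
    The kind of first-order estimate such an analytical model produces can be sketched with an Amdahl-style calculation that accounts for the offloaded fraction of work, the accelerator's speedup on that fraction, and a fixed offload overhead. This sketch is only inspired by the description above; the parameters and formula are assumptions, not Accelerometer's actual model.

        #include <stdio.h>

        /* Amdahl-style estimate: offload a fraction f of total work to an
           accelerator that runs it a times faster, paying a fixed offload
           overhead o (all times relative to an original runtime of 1.0).
           Illustrative assumption only, not Accelerometer's actual model. */
        static double estimated_speedup(double f, double a, double o) {
            double new_time = (1.0 - f) + f / a + o;
            return 1.0 / new_time;
        }

        int main(void) {
            /* Example: 60% of cycles offloadable, 10x accelerator, 2% overhead:
               new time = 0.40 + 0.06 + 0.02 = 0.48, roughly a 2.1x speedup. */
            printf("estimated speedup: %.2fx\n",
                   estimated_speedup(0.60, 10.0, 0.02));
            return 0;
        }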