
    3D-SoftChip: A novel 3D vertically integrated adaptive computing system [thesis]

    Get PDF
    At present, as we enter the nano- and giga-scale integrated-circuit era, there are many system design challenges which must be overcome to resolve problems in current systems. The dramatically increased non-recurring engineering (NRE) cost, the sharply shortened time-to-market (TTM) period, and the ever-widening design productivity gap are good examples of these problems. To cope with them, the concept of an Adaptive Computing System is becoming a critical technology for next-generation computing systems. The other big problem is an explosion in interconnection wire requirements in standard planar technology, resulting from the very high data-bandwidth requirements demanded for real-time communications and multimedia signal processing. The concept of 3D vertical integration of 2D planar chips becomes an attractive solution to combat the ever-increasing interconnect wire requirements. As a result, this research proposes the concept of a novel 3D integrated adaptive computing system, which we term 3D-ACSoC. The architecture and advanced system design methodology of the proposed 3D-SoftChip as a forthcoming giga-scale integrated-circuit computing system are introduced, along with high-level system modeling and functional verification in the early design stage using SystemC.
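    The early-stage functional verification the abstract mentions can be sketched in miniature. The thesis's actual models are written in SystemC (C++); the following is a hypothetical Python stand-in showing the underlying idea: check a high-level model of a processing element against a simple golden reference before committing to hardware.

    ```python
    # Hedged sketch (not the thesis's SystemC models): verify a high-level
    # functional model of a multiply-accumulate processing element against
    # a golden reference, with a finite word length as the only refinement.

    def golden_mac(a, b, acc):
        """Reference behaviour: ideal multiply-accumulate."""
        return acc + a * b

    def pe_model(a, b, acc, width=16):
        """High-level model of a processing element with finite word length."""
        mask = (1 << width) - 1
        return (acc + a * b) & mask

    # Early functional verification: the model must match the reference
    # for vectors whose results fit in the word length.
    for a, b, acc in [(3, 4, 10), (255, 255, 0), (1, 1, 65534)]:
        assert pe_model(a, b, acc) == golden_mac(a, b, acc) & 0xFFFF
    print("all vectors passed")
    ```

    The same check can later be replayed against a cycle-accurate or RTL model, which is the point of fixing the functional behaviour this early.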

    Model-driven development of data intensive applications over cloud resources

    Get PDF
    The proliferation of sensors in recent years has generated large amounts of raw data, forming data streams that need to be processed. In many cases, cloud resources are used for such processing, exploiting their flexibility, but these sensor streaming applications often need to support operational and control actions with real-time and low-latency requirements that go beyond the cost-effective and flexible solutions supported by existing cloud frameworks, such as Apache Kafka, Apache Spark Streaming, or Map-Reduce Streams. In this paper, we describe a model-driven, stepwise-refinement methodological approach for streaming applications executed over clouds. The central role is assigned to a set of Petri net models for specifying functional and non-functional requirements. They support model reuse, and a way to combine formal analysis, simulation, and approximate computation of minimal and maximal boundaries of non-functional requirements when the problem is either mathematically or computationally intractable. We show how our proposal can assist developers in their design and implementation decisions from a performance perspective. Our methodology supports performance analysis across all stages of the engineering process: we can (i) analyse how the application can be mapped onto cloud resources, and (ii) obtain key performance indicators, including throughput or economic cost, so that developers are assisted in their development tasks and in their decision making. To illustrate our approach, we make use of the pipelined wavefront array.
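    The role the abstract assigns to Petri nets can be illustrated with a toy model. This is a hypothetical sketch, not the authors' tooling: a place/transition net for a two-stage streaming pipeline, whose simulation gives a rough completion count and step count from which throughput bounds could be estimated.

    ```python
    # Minimal place/transition Petri net (illustrative, not the paper's models).
    # Places hold tokens; a transition fires when every input place has
    # enough tokens, consuming them and producing tokens on its outputs.

    class PetriNet:
        def __init__(self, marking):
            self.marking = dict(marking)   # place -> token count
            self.transitions = {}          # name -> (inputs, outputs)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking[p] >= n for p, n in inputs.items())

        def fire(self, name):
            inputs, outputs = self.transitions[name]
            for p, n in inputs.items():
                self.marking[p] -= n
            for p, n in outputs.items():
                self.marking[p] = self.marking.get(p, 0) + n

    # Two-stage streaming pipeline: source -> ingest -> buffer -> process -> done
    net = PetriNet({"source": 100, "buffer": 0, "done": 0})
    net.add_transition("ingest",  {"source": 1}, {"buffer": 1})
    net.add_transition("process", {"buffer": 1}, {"done": 1})

    steps = 0
    while net.enabled("ingest") or net.enabled("process"):
        for t in ("ingest", "process"):
            if net.enabled(t):
                net.fire(t)
        steps += 1

    print(net.marking["done"], "items in", steps, "steps")
    ```

    Replacing the unit-delay firing rule with timed transitions is what lets such a net bound latency and throughput before any cloud deployment exists.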

    Product Lifecycle Management and the Quest for Sustainable Space Exploration Solutions

    Get PDF
    Product Lifecycle Management (PLM) is an outcome of lean thinking to eliminate waste and increase productivity. PLM is inextricably tied to the systems engineering business philosophy, coupled with a methodology by which personnel, processes and practices, and information technology combine to form an architecture platform for product design, development, manufacturing, operations, and decommissioning. In this model, which is being implemented by the Engineering Directorate at the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center, total lifecycle costs are important variables for critical decision-making. With the ultimate goal of delivering quality products that meet or exceed requirements on time and within budget, PLM is a powerful tool to shape everything from engineering trade studies and testing goals, to integrated vehicle operations and retirement scenarios. This paper will demonstrate how the Engineering Directorate is implementing PLM as part of an overall strategy to deliver safe, reliable, and affordable space exploration solutions. It has been 30 years since the United States fielded the Space Shuttle. The next-generation space transportation system requires a paradigm shift such that digital tools and knowledge management, which are central elements of PLM, are used consistently to maximum effect. The outcome is a better use of scarce resources, along with more focus on stakeholder and customer requirements, as a new portfolio of enabling tools becomes second nature to the workforce. This paper will use the design and manufacturing processes, which have transitioned to digital-based activities, to show how PLM supports the comprehensive systems engineering and integration function.
It will also walk through a launch countdown scenario in which an anomaly is detected, to show how the virtual vehicle created from paperless processes will help solve technical challenges and improve the likelihood of launching on schedule, with less hands-on labor needed for processing and troubleshooting. Sustainable space exploration solutions demand that all lifecycle phases be optimized. Adopting PLM, which has been used by the automotive industry for many years, for aerospace applications provides a foundation for strong, disciplined systems engineering and accountable return on investment by making lifecycle considerations variables in an iterative decision-making process. This paper combines the perspectives of the founding father of PLM with the experience of Engineering leaders who are implementing these processes and practices in real time. As the nation moves from an industrial-based society to one where information is a valued commodity, future NASA programs and projects will benefit from the experience being gained today for the exploration missions of tomorrow.

    Tasking Modeling Language: A toolset for model-based engineering of data-driven software systems

    Get PDF
    The interdisciplinary process of space systems engineering poses challenges for the development of the on-board software. The software integrates components from different domains and organizations and has to fulfill requirements, such as robustness, reliability, and real-time capability. Model-based methods not only help to give a comprehensive overview, but also improve productivity by allowing artifacts to be generated from the model automatically. However, general-purpose modeling languages, such as the Systems Modeling Language (SysML), are not always adequate because of their ambiguity resulting from their generic nature. Furthermore, sensor data handling, analysis, and processing of data in on-board software requires focus on the system’s data flow and event mechanism. To achieve this, we developed the Tasking Modeling Language (TML), which allows system engineers to model complex event-driven software systems in a simplified way and to generate software from the model. Type and consistency checks on the formal level help to reduce errors early in the engineering process. TML is focused on data-driven systems and its models are designed to be extended and customized to specific mission requirements. This paper describes the architecture of TML in detail, explains the base technology, the methodology, and the developed domain-specific languages (DSLs). It evaluates the design approach of the software via a case study and presents advantages as well as challenges faced.
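    The data-driven "tasking" execution model the abstract describes can be sketched generically. This is a hedged illustration of the idea, not the actual TML toolset: a task fires only when all of its input channels hold data, mirroring a data-flow/event mechanism.

    ```python
    # Hypothetical data-flow scheduler (not TML): tasks declare input and
    # output channels; a task fires when all of its inputs are available,
    # consuming them and publishing its outputs.

    class Task:
        def __init__(self, name, inputs, outputs, fn):
            self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

    def run_dataflow(tasks, channels):
        """Repeatedly fire any task whose inputs are all available."""
        fired = []
        progress = True
        while progress:
            progress = False
            for t in tasks:
                if all(ch in channels for ch in t.inputs):
                    args = [channels.pop(ch) for ch in t.inputs]  # consume inputs
                    for ch, val in t.fn(*args).items():           # publish outputs
                        channels[ch] = val
                    fired.append(t.name)
                    progress = True
        return fired

    tasks = [
        Task("decode",  ["raw"],   ["frame"],  lambda raw: {"frame": raw * 2}),
        Task("analyse", ["frame"], ["report"], lambda f:   {"report": f + 1}),
    ]
    channels = {"raw": 21}
    print(run_dataflow(tasks, channels), channels)
    ```

    In a model-based toolchain the task graph would be specified in a DSL and the scheduler generated from it; here both are written by hand purely to show the firing rule.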

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Full text link
    An embedded system's interaction with its environment inherently complicates the understanding of requirements and their correct implementation. Moreover, product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially an embedded system. This paper introduces a novel adaptive design methodology, which incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of the Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed based on the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk. For the case study, by computing the Average design-cycle Risk, it is shown that the adaptive method reduces the product development risk for a small increase in the total design cycle time. Comment: 21 pages, 9 figures
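    The comparison the abstract describes can be sketched numerically. The decay model and values below are illustrative assumptions, not taken from the paper: residual risk drops after each verified prototype step, and averaging it over the cycle gives a single criterion to compare methodologies.

    ```python
    # Hypothetical sketch of an averaged design-cycle risk criterion.
    # The per-step reduction factor is an assumption for illustration.

    def average_design_cycle_risk(initial_risk, reduction_per_step, steps):
        """Average the residual risk over all adaptive steps."""
        risks, risk = [], initial_risk
        for _ in range(steps):
            risks.append(risk)
            risk *= (1.0 - reduction_per_step)  # each verified prototype cuts risk
        return sum(risks) / len(risks)

    # A flow with no early verification carries full risk through the cycle;
    # an adaptive flow that halves uncertainty each step averages far lower.
    waterfall = average_design_cycle_risk(1.0, 0.0, 4)
    adaptive  = average_design_cycle_risk(1.0, 0.5, 4)
    print(round(waterfall, 3), round(adaptive, 3))
    ```

    The extra design-cycle time of the adaptive flow would enter as a second axis of the comparison, which is the trade-off the paper quantifies.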

    Automated Real-Time Testing (ARTT) for Embedded Control Systems (ECS)

    Get PDF
    Developing real-time automated test systems for embedded control systems has been a real problem. Some engineers and scientists have used customized software and hardware as a solution, which can be very expensive and time-consuming to develop. We have discovered how to integrate a suite of commercially available off-the-shelf software tools and hardware to develop a scalable test platform that is capable of performing complete black-box testing for a dual-channel real-time Embedded-PLC-based control system (www.aps.anl.gov). We will discuss how the Vali/Test Pro testing methodology was implemented to structure testing for a personnel safety system with large quantities of requirements and test cases. This work was supported by the U.S. Department of Energy, Basic Energy Sciences, under Contract No. W-31-109-Eng-38. Comment: 6 pages, 8 figures, ICALEPCS 2001, Poster Session
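    The black-box structure the abstract describes can be illustrated in miniature. This is a hypothetical sketch, not the Vali/Test Pro product: each requirement becomes a stimulus/expected-response pair driven against a channel with no knowledge of its internals.

    ```python
    # Illustrative black-box test harness: apply stimuli, compare responses.
    # The channel function stands in for one channel of a safety PLC.

    def run_black_box(channel, test_cases):
        """Return the list of (name, expected, actual) for failing cases."""
        failures = []
        for name, stimulus, expected in test_cases:
            actual = channel(stimulus)
            if actual != expected:
                failures.append((name, expected, actual))
        return failures

    # Stand-in behaviour: the interlock trips when a door opens or E-stop fires.
    def channel_a(inputs):
        return {"interlock_tripped": inputs["door_open"] or inputs["estop"]}

    cases = [
        ("door open trips",  {"door_open": True,  "estop": False}, {"interlock_tripped": True}),
        ("estop trips",      {"door_open": False, "estop": True},  {"interlock_tripped": True}),
        ("all clear passes", {"door_open": False, "estop": False}, {"interlock_tripped": False}),
    ]
    print(run_black_box(channel_a, cases))  # -> [] when every case passes
    ```

    For a dual-channel system the same case table would be replayed against each channel independently, which is what makes the approach scale with the size of the requirement set.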

    Mathematical and computer modeling of electro-optic systems using a generic modeling approach

    Get PDF
    The conventional approach to modelling electro-optic sensor systems is to develop separate models for individual systems or classes of system, depending on the detector technology employed in the sensor and the application. However, this ignores commonality in the design and components of these systems. A generic approach is presented for modelling a variety of sensor systems operating in the infrared waveband that also allows systems to be modelled with different levels of detail and at different stages of the product lifecycle. The provision of different model types (parametric and image-flow descriptions) within the generic framework can allow valuable insights to be gained.
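    The parametric side of such a generic framework can be sketched as a single sensor description reused across systems, with detector- and optics-specific numbers supplied as data rather than baked into per-system models. The parameter values below are illustrative placeholders, not from the paper.

    ```python
    # Hedged sketch of a generic parametric sensor description: one model,
    # many instantiations. Derived quantities come from the shared parameters.

    from dataclasses import dataclass

    @dataclass
    class SensorParams:
        aperture_m: float          # optics aperture diameter
        focal_length_m: float
        pixel_pitch_m: float
        quantum_efficiency: float  # detector-dependent

        @property
        def f_number(self):
            return self.focal_length_m / self.aperture_m

        @property
        def ifov_rad(self):
            # instantaneous field of view of one pixel
            return self.pixel_pitch_m / self.focal_length_m

    # Two infrared systems described by the same generic model:
    mwir = SensorParams(aperture_m=0.10, focal_length_m=0.30,
                        pixel_pitch_m=15e-6, quantum_efficiency=0.7)
    lwir = SensorParams(aperture_m=0.10, focal_length_m=0.20,
                        pixel_pitch_m=17e-6, quantum_efficiency=0.6)

    for s in (mwir, lwir):
        print(round(s.f_number, 2), s.ifov_rad)
    ```

    An image-flow description would plug into the same parameter set at a later lifecycle stage, which is the level-of-detail flexibility the abstract claims.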

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    Get PDF
    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/.

    Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and so is licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information, or by including it in your own product or application. Where you do any of the above, you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/

    This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held 17th April 2019 at the University of Hertfordshire, Hatfield, UK. This conference is a local event aiming to bring together research students, staff, and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.