
    Fault-tolerant FPGA for mission-critical applications

    One of the devices that plays a major role in electronic circuit design, particularly in safety-critical applications, is the Field Programmable Gate Array (FPGA), thanks to its high performance, reconfigurability, and low development cost. FPGAs are used in many domains, including data processing, networking, automotive, space, and industrial applications. However, the move to smaller feature sizes in the latest FPGA architectures negatively affects the reliability of such applications, increasing the need for fault-tolerant techniques that improve reliability and extend the lifetime of FPGA-based systems. In this thesis, two fault-tolerant techniques for FPGA-based applications are proposed, each with a built-in fault detection region. A low-cost fault detection scheme, shared by both techniques, primarily detects open faults in the FPGA's programmable interconnect resources; stuck-at faults and Single Event Upsets (SEUs) can also be detected. Each technique has its own fault recovery approach. The first uses a spare module and a 2-to-1 multiplexer to recover from any detected fault. The second recovers using the Partial Reconfiguration (PR) capability of FPGAs: it identifies a Partially Reconfigurable block (P_b) in the FPGA that is used in the recovery process once the first faulty module is identified, so a single location suffices to recover from faults in any of the FPGA's modules or interconnects. Simulation results show that both techniques can detect and recover from open faults, can additionally detect stuck-at faults and SEUs, and require low area overhead.
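    The spare-module recovery of the first approach can be sketched in a few lines of Python; the class and function names below are illustrative, not taken from the thesis:

```python
def mux2(select, a, b):
    # 2-to-1 multiplexer: select = 0 routes a (primary), select = 1 routes b (spare)
    return b if select else a

class RecoverableModule:
    def __init__(self, primary, spare):
        self.primary = primary        # module under protection
        self.spare = spare            # identical spare implementation
        self.fault_detected = False   # latched by the fault detection region

    def detect_fault(self, test_input, expected):
        # the detection region applies a known test vector; a mismatch
        # (e.g. from an open interconnect or a stuck-at fault) latches a fault
        if self.primary(test_input) != expected:
            self.fault_detected = True

    def run(self, x):
        # once a fault is latched, the mux routes around the faulty primary
        return mux2(self.fault_detected, self.primary(x), self.spare(x))

healthy = lambda x: x + 1
stuck_at_zero = lambda x: 0     # primary whose output is stuck at 0

m = RecoverableModule(stuck_at_zero, healthy)
m.detect_fault(test_input=3, expected=4)   # 0 != 4, so the fault is latched
recovered = m.run(5)                       # now served by the spare module
```

    In the hardware scheme both modules run continuously and only the multiplexer select changes, so recovery is immediate once detection latches.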

    Big SaaS: The Next Step Beyond Big Data

    Software-as-a-Service (SaaS) is a model of cloud computing in which software functions are delivered to users as services. The past few years have witnessed its global flourishing. In the foreseeable future, SaaS applications will integrate with the Internet of Things, Mobile Computing, Big Data, Wireless Sensor Networks, and many other computing and communication technologies to deliver customizable intelligent services to a vast population. This will give rise to an era of what we call Big SaaS: systems of unprecedented complexity and scale, with huge numbers of tenants and users interrelated in complex ways. The code will be complex too, and will require Big Data, but will provide great value to the customer. With these benefits, however, come societal risks as well as other drawbacks and challenges. For example, it is difficult to ensure the quality of data and metadata obtained from crowdsourcing and to maintain the integrity of the conceptual model. Big SaaS applications will also need to evolve continuously. This paper discusses how to address these challenges at all stages of the software lifecycle.

    Single event upset hardened embedded domain specific reconfigurable architecture


    Efficient Range and Join Query Processing in Massively Distributed Peer-to-Peer Networks

    Peer-to-peer (P2P) has become a mainstream distributed computing architecture that supports massively large-scale data management and query processing. Complex query operators such as the range operator and the join operator are needed by various distributed applications, including content distribution, locality-aware services, computing resource sharing, and many others. This dissertation tackles several problems related to range and join query processing in P2P systems: fault-tolerant range query processing under a structured P2P architecture, distributed range caching under an unstructured P2P architecture, and integration of heterogeneous data under an unstructured P2P architecture. To support fault-tolerant range query processing, and thus provide strong performance guarantees in the presence of network churn, effective replication schemes are developed at either the overlay network level or the query processing level. To facilitate range query processing, a prefetch-based caching approach is proposed that eliminates the performance bottlenecks incurred by data items that are not well cached in the network. Finally, a purely decentralized partition-based join operator is devised to realize bandwidth-efficient join query processing under an unstructured P2P architecture. Theoretical analysis and experimental simulations demonstrate the effectiveness of the proposed approaches.
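    The partition-based join idea can be illustrated with a minimal single-process Python sketch (the relation contents and peer count are assumptions, not from the dissertation): each relation is hash-partitioned on the join key so that matching tuples land on the same peer, and each peer then joins only its own partition.

```python
from collections import defaultdict

def partition(relation, key, n_peers):
    # each peer p receives the tuples whose join key hashes to p
    parts = defaultdict(list)
    for row in relation:
        parts[hash(row[key]) % n_peers].append(row)
    return parts

def local_join(r_part, s_part, key):
    # ordinary hash join executed independently on one peer
    index = defaultdict(list)
    for r in r_part:
        index[r[key]].append(r)
    return [{**r, **s} for s in s_part for r in index[s[key]]]

def distributed_join(R, S, key, n_peers=4):
    rp, sp = partition(R, key, n_peers), partition(S, key, n_peers)
    results = []
    for p in range(n_peers):    # in a real P2P system these run concurrently
        results.extend(local_join(rp.get(p, []), sp.get(p, []), key))
    return results

R = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
S = [{"id": 1, "b": "u"}, {"id": 3, "b": "v"}]
joined = distributed_join(R, S, "id")
```

    The bandwidth saving comes from the partitioning step: each tuple travels to exactly one peer instead of being broadcast to all of them.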

    Analysis of Radiation-induced Cross Domain Errors in TMR Architectures on SRAM-based FPGAs

    SRAM-based FPGAs represent a low-cost alternative to ASIC devices thanks to their high performance and design flexibility. In the aerospace and avionics fields in particular, SRAM-based FPGAs are increasingly adopted for their reconfigurability, which makes them a viable solution for long-running applications. However, these fields are characterized by a radiation environment that makes the technology extremely sensitive to radiation-induced Single Event Upsets (SEUs) in the SRAM-based FPGA's configuration memory. Configuration scrubbing and Triple Modular Redundancy (TMR) have been widely adopted to cope with SEU effects. However, modern FPGA devices feature a heterogeneous distribution of routing resources and a complex configuration memory mapping, causing increasing sensitivity to Cross Domain Errors affecting the TMR structure. In this paper we develop a new methodology to calculate the reliability of TMR architectures that accounts for the intrinsic characteristics of the new generation of SRAM-based FPGAs, including an analysis of the configuration-bit-sharing phenomenon and of the routing long lines. We experimentally evaluate the method on various benchmark circuits, measuring the Mean Upset To Failure (MUTF). Finally, we use the results of the developed method to implement an improved design achieving a 29x improvement in MUTF.
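    The cross-domain failure mode can be illustrated with a bitwise majority voter in Python (a sketch, not the paper's methodology): an SEU confined to one TMR domain is masked, but the same bit corrupted in two domains, as can happen when domains share configuration bits or long routing lines, defeats the voter.

```python
def tmr_vote(a, b, c):
    # bitwise majority of the three redundant domain outputs
    return (a & b) | (a & c) | (b & c)

golden = 0b1010          # fault-free output of each TMR domain
seu = 0b0010             # a single upset flipping bit 1

masked = tmr_vote(golden ^ seu, golden, golden)       # upset in one domain
cross = tmr_vote(golden ^ seu, golden ^ seu, golden)  # same bit in two domains
```

    The first call returns the golden value because two of three domains still agree on every bit; the second does not, which is why the reliability analysis must account for configuration resources shared between domains.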

    A quality model for web frontend components

    Web frontend application developers utilize many components in their work that provide functionality required by the application under development. These components are typically written in JavaScript and may have been developed by third parties or inside the company. The quality of the selected components plays a major role in the overall quality of the web frontend application they are used in. Additionally, component quality affects how desirable a component is to the web application developers who might utilize it in their own applications. As a consequence, the developers of these components want them to be high-quality and easy to use. The problems this thesis seeks to answer are therefore how to develop easy-to-use, high-quality components and how to measure web frontend component quality. This thesis presents a web frontend component quality model as an answer to these problems. The model is based on the characteristics of web frontend development and components, and on research on software component quality models; both are discussed in the literature review part of this thesis. The model divides component quality hierarchically into four levels: quality characteristics, quality sub-characteristics, quality attributes, and quality measures. Quality characteristics are high-level abstractions of quality, such as functionality and usability, that are further specified by the sub-characteristics and attributes. The quality measures are concrete instructions for how to measure values for the quality attributes. The model consists of 6 quality characteristics, 13 quality sub-characteristics, 30 quality attributes, and measures for them. The quality model was tested and evaluated by measuring the quality of the report editor component developed by Wapice Ltd. The evaluation was able to measure values for the quality attributes according to the model, and numerous suggestions were provided for improving the quality of the report editor component's implementation and documentation: for example, improving the configurability of the component through a configuration object and an events interface, providing HTML-based documentation, and improving loading-type coverage by adding support for the CommonJS and AMD module formats.
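    The four-level hierarchy of the quality model can be sketched as a small Python structure; the example characteristic names and the averaging rule used for aggregation are assumptions for illustration, not taken from the thesis:

```python
from statistics import mean

# {characteristic: {sub-characteristic: {attribute: measured value in [0, 1]}}}
# the innermost numbers are what the quality measures produce
model = {
    "usability": {
        "documentation":   {"has_api_docs": 1.0, "has_examples": 0.5},
        "configurability": {"config_object": 0.0, "events_interface": 0.0},
    },
}

def sub_score(ch, sub):
    # a sub-characteristic aggregates the measured values of its attributes
    return mean(model[ch][sub].values())

def characteristic_score(ch):
    # a characteristic aggregates its sub-characteristics
    return mean(sub_score(ch, sub) for sub in model[ch])

score = characteristic_score("usability")
```

    With these example measurements, the low configurability scores pull the usability score down, matching the kind of improvement suggestions the evaluation produced.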

    Optimization and Control of Cyber-Physical Vehicle Systems

    A cyber-physical system (CPS) is composed of tightly integrated computation, communication, and physical elements. Medical devices, buildings, mobile devices, robots, transportation, and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control, and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability, and safety, while online regulation enables the vehicle to respond to disturbances, modeling errors, and uncertainties. CPVS optimization occurs both at design time and at run time. This paper surveys the run-time cooperative optimization, or co-optimization, of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated, or co-regulated, when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes, with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning, and resource sharing are examined.

    A Survey on Approximate Multiplier Designs for Energy Efficiency: From Algorithms to Circuits

    Given the stringent energy-efficiency requirements of Internet-of-Things edge devices, approximate multipliers, as a basic component of many processors and accelerators, have been proposed and studied for decades, especially for error-resilient applications. The computation error and the energy efficiency largely depend on how and where approximation is introduced into a design. This article therefore aims to provide a comprehensive review of approximation techniques in multiplier designs, ranging from algorithms and architectures to circuits. We have implemented representative approximate multiplier designs in each category to understand the impact of the design techniques on accuracy and efficiency. The designs can then be effectively deployed in high-level applications, such as machine learning, to gain energy efficiency at the cost of a slight accuracy loss.
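    As a concrete example of the algorithm-level category, a simple truncation-based approximate multiplier drops the k low-order bits of each operand before multiplying, trading a small, bounded relative error for far fewer partial products (this sketch is generic, not a specific design from the survey):

```python
def approx_mul(a, b, k=4):
    # drop the k least significant bits of each operand (and the partial
    # products they would generate), then shift the result back into place
    return ((a >> k) * (b >> k)) << (2 * k)

a, b = 1000, 2000
exact = a * b                           # exact product
approx = approx_mul(a, b)               # truncated product with k = 4
rel_err = abs(exact - approx) / exact   # relative error of the approximation
```

    For these operands the relative error stays under 1%, which is the kind of trade-off error-resilient applications such as machine-learning inference can absorb.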

    Reliability on ARM Processors Against Soft Errors Through SIHFT Techniques

    ARM processors are leaders in embedded systems, delivering high-performance computing, power efficiency, and reduced cost. For this reason, there is considerable interest in their use in the aerospace industry. However, the use of sub-micron technologies has increased sensitivity to radiation-induced transient faults, so the mitigation of soft errors has become a major concern. Software-Implemented Hardware Fault Tolerance (SIHFT) techniques are a low-cost way to protect processors against soft errors; on the other hand, they cause high overheads in execution time and memory, which in turn increase energy consumption. In this work, we implement a set of software techniques based on different redundancy and checking rules, together with a low-overhead technique to protect the program's execution flow. Tests are performed on the ARM Cortex-A9 processor, using simulated fault injection campaigns and radiation tests with heavy ions. The results evaluate the trade-offs among fault detection, execution time, and memory footprint, and show significant improvements in overheads compared to previously reported techniques. This work was supported in part by CNPq and CAPES, Brazilian funding agencies.
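    A typical SIHFT checking rule, duplication with comparison, can be sketched in Python (illustrative only; the paper's techniques operate at the instruction level on ARM code): every variable and computation is duplicated, and the two copies are compared so that a transient fault corrupting one copy is detected.

```python
class SoftErrorDetected(Exception):
    pass

def dup_add(x, x_dup, y, y_dup):
    # rule: every computation is performed on both copies of the variables
    r, r_dup = x + y, x_dup + y_dup
    # rule: the copies are compared; a mismatch signals a transient fault
    if r != r_dup:
        raise SoftErrorDetected(r, r_dup)
    return r, r_dup

ok = dup_add(2, 2, 3, 3)    # fault-free run: both copies agree

try:
    # simulate a bit-flip hitting one copy of x before the addition
    dup_add(2, 2 ^ 0b100, 3, 3)
    caught = False
except SoftErrorDetected:
    caught = True
```

    The execution-time and memory overheads reported in the paper come precisely from this doubling of data and operations plus the comparison instructions.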

    Web service control of component-based agile manufacturing systems

    Current global business competition has created significant challenges for the manufacturing and production sectors: shorter product lifecycles, more diverse and customized products, and cost pressure from competitors and customers. To remain competitive, manufacturers, particularly in the automotive industry, require the next generation of manufacturing paradigms, supporting flexible and reconfigurable production systems that allow quick system changeovers between various types of products. In addition, closer integration of shop-floor and business systems is required, as indicated by research efforts investigating "Agile and Collaborative Manufacturing Systems" to support the production unit throughout the manufacturing lifecycle. The integration of a business enterprise with its shop floor and lifecycle supply partners is currently only achieved through complex proprietary solutions, due to differences in technology, particularly between automation and business systems. The situation is further complicated by the diverse types of automation control devices employed. Recently, the emerging technologies of Service Oriented Architectures (SOAs) and Web Services (WS) have been demonstrated and proved successful in linking business applications. The adoption of this Web Services approach at the automation level, which would enable seamless integration of the business enterprise and the shop-floor system, is an active research topic within the automotive domain. If successful, reconfigurable automation systems, formed by a network of collaborative, autonomous, and open control platforms in a distributed, loosely coupled manufacturing environment, can be realized through a unifying platform of WS interfaces for device communication. The adoption of SOA Web Services on embedded automation devices can be achieved using the Device Profile for Web Services (DPWS) protocols, which encapsulate device control functionality as provided services (e.g. device I/O operations, device state notifications, device discovery) and business application interfaces into the physical control components of machining automation. This novel approach supports the integration of pervasive enterprise applications through unifying Web Services interfaces and neutral Simple Object Access Protocol (SOAP) message communication between control systems and business applications over standard Ethernet Local Area Networks (LANs). In addition, the reconfigurability of the automation system is enhanced via the utilisation of Web Services throughout an automated control, build, installation, test, maintenance, and reuse system lifecycle via device self-discovery provided by the DPWS protocol...cont'd
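    As an illustration of the SOAP messaging mentioned above, a minimal SOAP 1.2 request for a hypothetical device I/O operation can be built with the Python standard library; the service operation and parameter names here are invented for the example, not taken from the DPWS specification.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope namespace

def make_request(operation, params):
    # build <soap:Envelope><soap:Body><operation>...</operation></soap:Body></soap:Envelope>
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# hypothetical digital-output operation on an automation device
msg = make_request("SetOutput", {"Channel": 2, "Value": 1})
```

    Because the payload is plain XML over standard transports, the same message format can be produced and consumed by both enterprise applications and embedded controllers, which is the neutrality the abstract refers to.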