
    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all of these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures; this is why such devices must be tested so that correct behavior can be guaranteed at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to facilitate test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system in ways not possible in the normal functional mode, passing through otherwise unreachable configurations. Alternative solutions that do not violate the functional mode are referred to as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) that is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent.
    My research activities have been mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products.
    Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been a focus of my research. Such infrastructures are known as reconfigurable scan networks (RSNs), and their practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
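
    For illustration only (not taken from the thesis), the sketch below shows the general shape of an SBST test program as described above: deterministic operands exercise a processor module (here the ALU), the observed results are compacted into a signature, and the verdict is made observable in memory. The memory-mapped address and the golden signature are hypothetical placeholders; real SBST programs are generated per fault model and are often written in assembly.

```c
/* Minimal, hypothetical SBST-style test fragment (illustration only).
 * Real SBST programs are generated per fault model, often in assembly,
 * and target specific processor modules; the address and golden
 * signature below are placeholders, not values from the thesis. */
#include <stdint.h>

#define RESULT_MAILBOX   ((volatile uint32_t *)0x4000F000u) /* hypothetical */
#define GOLDEN_SIGNATURE 0xC0FFEE42u                        /* placeholder  */

static uint32_t fold(uint32_t sig, uint32_t value)
{
    /* Rotate-and-XOR compaction of observed results (MISR-like). */
    return ((sig << 1) | (sig >> 31)) ^ value;
}

void sbst_alu_test(void)
{
    static const uint32_t ops[] = { 0x00000000u, 0xFFFFFFFFu,
                                    0xAAAAAAAAu, 0x55555555u };
    uint32_t sig = 0;

    /* Excite the ALU with patterns chosen to toggle its datapath,
     * compacting every result into the signature. */
    for (unsigned i = 0; i < sizeof ops / sizeof ops[0]; ++i) {
        for (unsigned j = 0; j < sizeof ops / sizeof ops[0]; ++j) {
            sig = fold(sig, ops[i] + ops[j]);
            sig = fold(sig, ops[i] & ops[j]);
            sig = fold(sig, ops[i] ^ ops[j]);
        }
    }

    /* Make the effect of a possible defect observable: compare against
     * a precomputed golden signature and report the verdict in memory. */
    *RESULT_MAILBOX = (sig == GOLDEN_SIGNATURE) ? 1u : 0u;
}
```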

    Seismogeodetic Imaging of Active Crustal Faulting

    Monitoring microseismicity is important for illuminating active faults and for improving our understanding of earthquake physics. These tasks are difficult in urban areas, where the SNR is poor and the level of background seismicity is low. One example is the Newport-Inglewood fault zone (NIFZ), an active fault that traverses the city of Long Beach (LB). The catalog magnitude of completeness within this area is M=2, about one order of magnitude larger than along other, less instrumented faults in southern California. Since earthquakes obey a power-law distribution according to which the number of events increases tenfold for each unit drop in magnitude, reducing the magnitude of completeness along the NIFZ will significantly decrease the time needed for effective monitoring. The LB and Rosecrans experiments provide a unique opportunity for studying seismicity along the NIFZ. These two arrays contain thousands of vertical geophones deployed along the NIFZ for periods of several months for exploration purposes. The array recordings are dominated by noise sources such as the local airport, highways, and pumping in the nearby oil fields. We utilize array processing techniques to enhance the SNR. We downward continue the recorded wave field to a depth of a few kilometers, which allows us to detect signals whose amplitude is a few percent of the average surface noise. The migrated wave field is back-projected onto a volume beneath the arrays to search for seismic events. The new catalog illuminates the fault structure beneath LB and allows us to study the depth-dependent transition in earthquake scaling properties.
    Deep aseismic transients carry valuable information on the physical conditions that prevail at the roots of seismic faults. However, due to the limited sensitivity of geodetic networks, details of the spatiotemporal evolution of such transients are not well resolved. To address this problem, we have developed a new technique to jointly infer the distribution of aseismic slip from seismicity and strain data. Our approach relies on the aftershock model of Dieterich (1994) to map observed changes in seismicity rates into stress changes. We apply this technique to study a three-month-long transient slip event on the Anza segment of the San Jacinto Fault (SJF), triggered by the remote 2010 Mw 7.2 El Mayor-Cucapah (EMC) mainshock. The EMC sequence in Anza initiated with ten days of rapid (≈100 times the long-term slip rate), deep (12-17 km) slip, which migrated along the SJF strike. During the following 80 days afterslip remained stationary, thus significantly stressing a segment hosting the impending Mw 5.4 Collins Valley mainshock. Remarkably, the cumulative moment due to afterslip induced by the later mainshock is about 10 times larger than the moment corresponding to the mainshock and its aftershocks. Similar to sequences of large earthquakes rupturing fault gaps, afterslip generated by the two mainshocks is spatially complementary. One interpretation is that the stress field due to afterslip early in the sequence determined the spatial extent of the late slip episode. Alternatively, the spatial distribution is the result of strong heterogeneity of frictional properties within the transition zone. Our preferred model suggests that Anza seismicity is primarily induced by stress transfer from an aseismically slipping principal fault to adjacent subsidiary faults, and that the importance of earthquake interactions for generating seismicity is negligible.
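
    For reference, a widely used form of the Dieterich (1994) rate-and-state seismicity model that underlies the mapping from seismicity-rate changes to stress changes is given below, in conventional notation; the expressions are standard results and are not reproduced from the thesis.

```latex
% Dieterich (1994) rate-state seismicity model, conventional notation:
% R seismicity rate, r background rate, \dot\tau_r background stressing rate,
% A\sigma constitutive parameter, \gamma state variable.
\begin{align}
  R &= \frac{r}{\gamma\,\dot\tau_r}, &
  \mathrm{d}\gamma &= \frac{1}{A\sigma}\left(\mathrm{d}t - \gamma\,\mathrm{d}\tau\right),
\end{align}
% so that a sudden stress step \Delta\tau produces an Omori-like transient:
\begin{equation}
  R(t) = \frac{r}{\left[\exp\!\left(-\frac{\Delta\tau}{A\sigma}\right) - 1\right]
         \exp\!\left(-\frac{t}{t_a}\right) + 1},
  \qquad t_a = \frac{A\sigma}{\dot\tau_r}.
\end{equation}
```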

    Runtime Adaptation of Scientific Service Workflows

    Software landscapes are subject to change rather than being complete once built. Changes may be caused by modified customer behavior, the shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise. New architectural models have to be designed and implemented, existing software has to be integrated, and, finally, the new software has to be deployed, monitored, and, where appropriate, optimized during runtime under realistic usage scenarios. All of these situations often demand manual intervention, which makes them error-prone. This thesis addresses these types of runtime adaptation. Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is provided that aims at an easy-to-use approach by separating the role of the workflow expert from the role of the domain expert. After the development of workflows, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the deployment of the necessary middleware tools is performed automatically.
    The use of a distributed infrastructure can lead to communication problems. In order to keep workflows robust, these exceptional cases need to be treated. However, handled this way, the process logic of a workflow gets mixed up and bloated with infrastructural details, which increases its complexity. In this work, a module is presented that deals automatically with infrastructural faults and thereby allows the separation of these two layers to be kept. When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior or structure, but these methods still require modifying the configuration or the implementation of each individual service. Aspect-oriented programming, on the other hand, allows functionality to be woven into existing code even without having its source. Since the functionality needs to be woven into the code, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide new, SOA-compliant methods to weave functionality into the communication layer of web services.
    The main contributions of this thesis are the following. Shifting towards a service-oriented architecture: the generic and extensible Legacy Code Description Language and the corresponding framework make it possible to wrap existing software, e.g., as web services, which afterwards can be composed into a workflow by SimpleBPEL without overburdening the domain expert with technical details, which are instead handled by a workflow expert. Runtime adaptation: based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and is able to automatically provision new machines in case a scale-out becomes necessary. If the resources' load drops, e.g., because of fewer workflow executions, a scale-in is also performed automatically. The scheduling algorithm takes the data transfer between the services into account in order to prevent scheduling allocations that would increase the workflow's makespan due to unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm is able to additionally consider cost, so that a user can define her own preferences balancing optimized workflow execution times against minimized costs. Possible communication errors are automatically detected and, subject to certain constraints, corrected. Adaptation of communication: the presented request/response aspects allow functionality to be woven into the communication of web services. By defining a pointcut language that relies only on the exchanged documents, the implementation of the services needs to be neither known nor available. The weaving process itself is modeled using web services. In this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
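
    As an illustration of the kind of monitoring-driven decision described above (not the thesis's actual scheduler, which additionally weighs data-transfer costs and multi-objective time/cost preferences), a minimal threshold-based scale-in/scale-out rule might look as follows; the thresholds and the monitored quantities are assumptions.

```c
/* Illustrative sketch of a threshold-based scale-in/scale-out decision,
 * assuming hypothetical utilization thresholds and a cooldown period. */
typedef enum { SCALE_NONE, SCALE_OUT, SCALE_IN } scale_action_t;

typedef struct {
    double   avg_utilization; /* observed load of provisioned machines, 0..1 */
    unsigned machines;        /* currently provisioned machines              */
    unsigned pending_tasks;   /* queued workflow service invocations         */
    unsigned cooldown_left;   /* monitoring intervals since the last action  */
} monitor_sample_t;

scale_action_t decide_scaling(const monitor_sample_t *s)
{
    const double upper = 0.80;  /* hypothetical scale-out threshold */
    const double lower = 0.30;  /* hypothetical scale-in threshold  */

    if (s->cooldown_left > 0)
        return SCALE_NONE;                      /* avoid oscillation          */
    if (s->avg_utilization > upper || s->pending_tasks > s->machines)
        return SCALE_OUT;                       /* provision an extra machine */
    if (s->avg_utilization < lower && s->machines > 1)
        return SCALE_IN;                        /* release an idle machine    */
    return SCALE_NONE;
}
```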

    Conjoint Monitoring of the Ocean Bottom offshore Chile Humboldt Organic MattEr Remineralization, Cruise No. SO288, 15.01.2022 – 15.02.2022, Guayaquil (Ecuador) – Valparaiso (Chile) COMBO & HOMER

    Cruise SO288 served two scientific projects. The main objective of the COMBO project was the recovery of three seafloor geodetic networks of the GeoSEA array, which were installed on the continental margin and outer rise offshore Iquique in northern Chile during RV SONNE cruise SO244. This work was flanked by additional seismic and bathymetric surveys to characterize the sub-seafloor structure. The South American subduction system around 21°S last ruptured in an earthquake in 1877 and was identified as a seismic gap prior to the 2014 Iquique earthquake (Mw=8.1). The southern portion of the segment remains unbroken by a recent earthquake and is currently in the latest stage of the interseismic phase of the seismic cycle. The seafloor geodetic measurements of the GeoSEA array provide a way to monitor crustal deformation at high resolution, comparable to the satellite-based GPS technique upon which terrestrial geodesy is largely based. The GeoSEA array consists of autonomous seafloor transponders installed on 4 m high tripods. The transponders within an array intercommunicate via acoustic signals for a period of up to three years. Recovery of the GeoSEA array using a remotely operated vehicle (ROV KIEL6000) required dedicated dives at the three network locations on the middle and lower continental slope (AREA1 and AREA3, respectively) and the outer rise of the Nazca plate (AREA2). All 23 GeoSEA transponders were successfully recovered and showed 100% uptime during the monitoring period. The GeoSEA survey represents the first seafloor geodetic transect across a subduction zone, spanning from the oceanic outer rise to the lower and middle slope of the continental upper plate.
    The second project, HOMER, focused on biogeochemical and microbiological processes that affect carbon cycling of the Humboldt Current System off northern Chile down to the deep ocean. For this purpose, water samples were collected for the detailed chemical characterization of organic matter and the activity of microorganisms. The work was complemented by onboard incubations of microbial populations from deep waters with naturally occurring organic matter.
    Cruise SO288 was the first expedition of RV SONNE back to the Pacific Ocean starting from a South American port during the COVID-19 pandemic. Despite strict safety and health requirements prior to boarding RV SONNE in Guayaquil, several members of the scientific and ship’s crew tested positive for COVID-19 two days after we left port. Containment measures were immediately put into action, flanked by a tight testing regime. Ten days after leaving Guayaquil, we were able to break the chains of infection and the scientific working program commenced.

    Towards a Model-Centric Software Testing Life Cycle for Early and Consistent Testing Activities

    The constant improvement of the available computing power nowadays enables the accomplishment of more and more complex tasks. The resulting implicit increase in the complexity of hardware and software solutions for realizing the desired functionality requires a constant improvement of the development methods used. On the one hand, the share of agile development practices, as well as test-driven development, has increased over the last decades. On the other hand, this trend creates the need to reduce complexity with suitable methods. At this point, the concept of abstraction comes into play, which manifests itself in model-based approaches such as MDSD or MBT. The thesis is motivated by the fact that the earliest possible detection and elimination of faults has a significant influence on product costs. Therefore, a holistic approach is developed in the context of model-driven development which allows testing to be applied already in early phases and especially on the model artifacts, i.e., it provides a shift left of the testing activities.
    To comprehensively address the complexity problem, a model-centric software testing life cycle is developed that maps the process steps and artifacts of classical testing to the model level. To this end, the conceptual basis is first created by putting the available model artifacts of all domains into context. In particular, structural mappings are specified across the included domain-specific model artifacts to establish a sufficient basis for all process steps of the life cycle. Besides, a flexible metamodel including operational semantics is developed, which enables experts to carry out an abstract test execution on the model level. Based on this, approaches for test case management, automated test case generation, evaluation of test cases, and quality verification of test cases are developed. In the context of test case management, a mechanism is realized that enables the selection, prioritization, and reduction of Test Model artifacts usable for test case generation. That is, a targeted set of test cases is generated that satisfies quality criteria such as coverage at the model level. These quality requirements are addressed by using a mutation-based analysis of the identified test cases, which builds on the model basis. As the last step of the model-centric software testing life cycle, two approaches are presented that allow an abstract execution of the test cases in the model context through structural analysis and a form of model interpretation concerning data flow information.
    All the approaches for addressing the problem are placed in the context of related work and examined for their feasibility by means of a prototypical implementation within the Architecture And Analysis Framework. Subsequently, the described approaches and their concepts are evaluated both qualitatively and quantitatively. Moreover, case studies show the practical applicability of the approach.
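
    As a minimal illustration of the mutation-based quality assessment mentioned above (a generic sketch, not the thesis's model-level machinery), the adequacy of a test set can be expressed as the fraction of mutants killed by at least one test case; the types and the verdict callback below are hypothetical stand-ins.

```c
/* Illustrative sketch: mutation-based adequacy scoring of a test set,
 * i.e. the ratio of mutants detected ("killed") by at least one test
 * case. The verdict callback abstracts the (model-level) test execution. */
#include <stddef.h>

typedef int (*verdict_fn)(size_t mutant_id, size_t test_id); /* 1 = kills */

double mutation_score(size_t n_mutants, size_t n_tests, verdict_fn kills)
{
    size_t killed = 0;
    for (size_t m = 0; m < n_mutants; ++m) {
        for (size_t t = 0; t < n_tests; ++t) {
            if (kills(m, t)) {      /* abstract test execution verdict */
                ++killed;
                break;              /* one killing test case suffices  */
            }
        }
    }
    return n_mutants ? (double)killed / (double)n_mutants : 0.0;
}
```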

    Secure, Reliable, and Energy-efficient Phase Change Main Memory

    Recent trends in supercomputing, shared cloud computing, and “big data” applications are placing ever-greater demands on computing systems and their memories. Such applications can readily saturate memory bandwidth and often operate on a working set which exceeds the capacity of modern DRAM packages. Unfortunately, this demand is not matched by DRAM technology development. As Moore’s Law slows and Dennard Scaling stops, further density improvements in DRAM and the underlying semiconductor devices are difficult [1]. In anticipation of this limitation, researchers have pursued emerging memory technologies that promise higher density than conventional DRAM devices. One such technology, phase-change memory (PCM), is especially desirable due to its increased density relative to DRAM. However, this nascent memory has outstanding challenges to overcome before it is viable as a DRAM replacement. PCM devices have limited write endurance and can consume more energy than their DRAM counterparts, necessitating careful control of how and how often they are written. A second challenge is the non-volatile nature of PCM devices; many applications rely on the volatility of DRAM to protect security-critical applications and operating system address space between accesses and power cycles. An obvious solution is to encrypt the memory, but the effective randomization of data is at odds with techniques which reduce writes to the underlying memory.
    This body of work presents three contributions for addressing all of these challenges simultaneously under the assumption that encryption is required. Using an encryption and encoding technique called CASTLE & TOWERs, PCM can be employed as main memory with up to a 30× improvement in device lifetime while opportunistically reducing dynamic energy. A second technique called MACE marries encoding with traditional error-correction schemes, providing up to a 2.6× improvement in device lifetime alongside a whole-lifetime energy evaluation framework to guide system design. Finally, an architecture called WINDU is presented which supports the application of encoding for an emerging encryption standard with an eye on energy efficiency. Together, these techniques advance the state of the art and offer a significant step toward the adoption of PCM as main memory.
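
    To illustrate why encoding can extend PCM lifetime, the sketch below shows a simplified, generic write-reduction scheme in the spirit of Flip-N-Write: store either the new word or its complement, whichever flips fewer cells relative to the old content. It is explicitly not the CASTLE & TOWERs, MACE, or WINDU technique from the dissertation, and it omits details such as accounting for the flip bit's own update.

```c
/* Illustrative sketch of a generic write-reduction encoding (Flip-N-Write
 * style): write the complement of the new word whenever that changes fewer
 * PCM cells, and record the choice in a per-word flip bit. Encryption tends
 * to randomize data and erode such savings, which is the tension the
 * dissertation addresses. */
#include <stdint.h>

static unsigned popcount32(uint32_t x)
{
    unsigned n = 0;
    while (x) { x &= x - 1; ++n; }   /* clear lowest set bit per iteration */
    return n;
}

/* Returns the word actually written to the cell array and sets *flip_bit. */
uint32_t encode_write(uint32_t old_word, uint32_t new_word, int *flip_bit)
{
    unsigned flips_plain    = popcount32(old_word ^ new_word);
    unsigned flips_inverted = popcount32(old_word ^ ~new_word);

    if (flips_inverted < flips_plain) {
        *flip_bit = 1;               /* stored complemented          */
        return ~new_word;            /* fewer cells change state     */
    }
    *flip_bit = 0;                   /* stored as-is                 */
    return new_word;
}

/* On a read, the flip bit restores the logical value. */
uint32_t decode_read(uint32_t stored_word, int flip_bit)
{
    return flip_bit ? ~stored_word : stored_word;
}
```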