    Vibration-based damage localisation: Impulse response identification and model updating methods

    Structural health monitoring has gained increasing interest over recent decades. As the technology has matured and monitoring systems are employed commercially, the development of more powerful and precise methods is the logical next step in this field. Vibration sensor networks with few measurement points, combined with the utilisation of ambient vibration sources, are especially attractive for practical applications, as this approach promises to be cost-effective while requiring minimal modification to the monitored structures. Since efficient methods for damage detection have already been developed for such sensor networks, the research focus is shifting towards extracting more information from the measurement data, in particular the localisation and quantification of damage. Two main concepts have produced promising results for damage localisation. The first approach involves a mechanical model of the structure, which is used in a model updating scheme to find the damaged areas. The second is a purely data-driven approach, which relies on residuals of vibration estimations to find regions where damage is probable. While much research has been conducted following these two concepts, different approaches are rarely compared directly using the same data sets. Therefore, this thesis presents advanced methods for vibration-based damage localisation using model updating as well as a data-driven method, and provides a direct comparison using the same vibration measurement data. The model updating approach presented in this thesis relies on multiobjective optimisation. Hence, the applied numerical optimisation algorithms are presented first. On this basis, the model updating parameterisation and objective function formulation are developed. The data-driven approach employs residuals from vibration estimations obtained using multiple-input finite impulse response filters. Both approaches are then verified using a simulated cantilever beam considering multiple damage scenarios. Finally, experimentally obtained data from an outdoor girder mast structure are used to validate the approaches. In summary, this thesis provides an assessment of model updating and residual-based damage localisation by means of verification and validation cases. It is found that the residual-based method exhibits numerical performance sufficient for real-time applications while providing a high sensitivity towards damage. However, the localisation accuracy is found to be superior using the model updating method.
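
    To make the residual idea concrete, the following is a minimal sketch (not the thesis's implementation; function names and the FIR order are illustrative): a multiple-input finite impulse response filter is fitted on healthy-state data to estimate one sensor from the others, and the root-mean-square residual on new data then serves as a damage indicator for the region around that sensor.

```python
import numpy as np

def build_regressor(inputs, order):
    """Stack lagged samples of every input channel into a regression matrix.

    inputs: array of shape (n_samples, n_channels); order: FIR filter length.
    Returns X of shape (n_samples - order + 1, n_channels * order).
    """
    n, _ = inputs.shape
    rows = []
    for t in range(order - 1, n):
        # most recent `order` samples of every channel, flattened
        rows.append(inputs[t - order + 1:t + 1, :].ravel())
    return np.asarray(rows)

def fit_fir_estimator(inputs, target, order=32):
    """Least-squares multiple-input FIR filter estimating `target` from `inputs`,
    fitted on data from the healthy reference state."""
    X = build_regressor(inputs, order)
    y = target[order - 1:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def residual_rms(inputs, target, coeffs, order=32):
    """Root-mean-square estimation residual on new data; larger values hint at
    damage near the estimated sensor."""
    X = build_regressor(inputs, order)
    y = target[order - 1:]
    return float(np.sqrt(np.mean((y - X @ coeffs) ** 2)))
```

    Training one such estimator per sensor and comparing how their residuals grow on new measurements yields a simple, data-driven localisation map.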

    A survey on run-time power monitors at the edge

    Effectively managing energy and power consumption is crucial to the success of the design of any computing system, helping to mitigate the efficiency obstacles posed by the continued downsizing of systems while also being a valuable step towards green and sustainable computing. The quality of energy and power management is strongly affected by the prompt availability of reliable and accurate information on the power consumption of the different parts composing the target monitored system. At the same time, effective energy and power management is even more critical for devices at the edge, which have proliferated over the past decade with the digital revolution brought by the Internet of Things. This manuscript aims to provide a comprehensive conceptual framework to classify the different approaches to implementing run-time power monitors for edge devices that have appeared in the literature, leading the reader toward the solutions that best fit their application needs and the requirements and constraints of their target computing platforms. Run-time power monitors at the edge are analyzed according to both power modeling and monitoring implementation aspects, identifying specific quality metrics for each in order to create a consistent and detailed taxonomy that encompasses the vast existing literature and provides a sound reference to the interested reader.
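
    One of the most common power-modeling approaches in this space is a linear model driven by run-time activity counters, calibrated once against a reference power meter. The sketch below is purely illustrative (the counter names and functions are hypothetical and not taken from the survey):

```python
import numpy as np

# Hypothetical activity counters sampled once per monitoring window.
COUNTERS = ["cpu_cycles", "cache_misses", "dram_accesses", "gpu_busy_cycles"]

def fit_power_model(counter_samples, measured_power):
    """Fit per-counter weights so that predicted power ~= weights . counters + idle.

    counter_samples: (n_windows, n_counters) array of counter deltas,
    measured_power:  (n_windows,) reference power in watts from a lab power meter.
    """
    ones = np.ones((counter_samples.shape[0], 1))
    X = np.hstack([counter_samples, ones])
    weights, *_ = np.linalg.lstsq(X, measured_power, rcond=None)
    return weights  # last entry approximates the idle/static power

def estimate_power(counter_window, weights):
    """Run-time power estimate for one monitoring window (no power meter needed)."""
    return float(np.dot(np.append(counter_window, 1.0), weights))
```

    Once calibrated, the model runs with negligible overhead, which is the main attraction of counter-based monitors on resource-constrained edge devices.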

    Impacts of coffee fragmented landscapes on biodiversity and microclimate with emerging monitoring technologies

    Habitat fragmentation and loss are causing biodiversity declines across the globe. As biodiversity is unevenly distributed, with many hotspots located in the tropics, conserving and protecting these areas is important to preserve as many species as possible. Chapter 2 presents an overview of the ecology of the Atlantic Forest, a highly fragmented biodiversity hotspot. A major driver of habitat fragmentation is agriculture, and in the tropics coffee is a major cash crop. Developing methods to monitor biodiversity effectively without labour-intensive surveys can help us understand how communities are using fragmented landscapes and better inform management practices that promote biodiversity. Acoustic monitoring offers a promising set of tools to remotely monitor biodiversity, and developments in machine learning enable automatic species detection and classification for certain taxa. Chapters 3 and 4 use acoustic monitoring surveys conducted on fragmented landscapes in the Atlantic Forest to quantify bird and bat communities in forest and coffee matrix, respectively. Chapter 3 shows that acoustic composition can reflect local avian communities. Chapter 4 applies a convolutional neural network (CNN) optimised on UK bat calls to a Brazilian bat dataset to estimate bat diversity and shows how bats preferentially use coffee habitats. In addition to monitoring biodiversity, monitoring microclimate forms a key part of climate-smart agriculture for climate change mitigation. Coffee agriculture is limited to the tropics, overlapping with biodiverse regions, but is threatened by climate change. This presents a challenge to countries strongly reliant on coffee exports such as Brazil and Nicaragua. Chapter 5 uses data from microclimate weather stations in Nicaragua to demonstrate that sun-coffee management is vulnerable to supraoptimal temperature exposure regardless of local forest cover or elevation.

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, developments in the semiconductor field over the last 15 years have established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches to facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has appeared in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
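
    A classic software-level example of the kind of technique such surveys cover is loop perforation, which skips a fraction of loop iterations to trade result accuracy for speed. A minimal, self-contained sketch (illustrative only, not drawn from the article):

```python
def mean_brightness(pixels, perforation_stride=1):
    """Average pixel brightness; a stride > 1 'perforates' the loop, skipping
    iterations to trade accuracy for execution time."""
    total, count = 0.0, 0
    for i in range(0, len(pixels), perforation_stride):
        total += pixels[i]
        count += 1
    return total / count if count else 0.0

# Exact vs. approximate result on synthetic data: a stride of 4 touches only a
# quarter of the elements, running faster at the cost of a small output error.
data = [(7 * i) % 255 for i in range(100_000)]
exact = mean_brightness(data)
approx = mean_brightness(data, perforation_stride=4)
```

    Whether such an approximation is acceptable depends on an application-level quality metric, which is exactly the kind of trade-off the surveyed techniques formalise.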

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats, manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, are expected to push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency-extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
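
    To illustrate the flavour of the second framework, here is a deliberately simplified sketch of latency characterisation and run-time monitoring. The thesis relies on machine-learning models, whereas this example keeps only a learned quantile threshold; all names and parameters are illustrative, not taken from the thesis:

```python
import numpy as np

def characterise_latency(healthy_latencies_ns, quantile=0.999):
    """Learn a latency envelope from measurements taken on trusted hardware.

    A real deployment would use a richer model per access pattern; this sketch
    keeps a single high quantile as the 'normal' upper bound.
    """
    return float(np.quantile(healthy_latencies_ns, quantile))

def monitor(latencies_ns, upper_bound_ns, window=1024, tolerance=0.01):
    """Flag a window if the fraction of accesses beyond the learned bound exceeds
    `tolerance`, hinting at a latency-extension or denial-of-service attack."""
    alarms = []
    samples = np.asarray(latencies_ns)
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        excess_fraction = np.mean(chunk > upper_bound_ns)
        alarms.append(bool(excess_fraction > tolerance))
    return alarms
```

    The appeal of such run-time checks is that they remain agnostic to how the untrusted element was modified, reacting only to its observable behaviour.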

    A War of Words: The Forms and Functions of Voice-Over in the American World War II Film — An Interdisciplinary Analysis

    Aside from being American World War II films, what else do the following films have in common? The Big Red One; Hacksaw Ridge; Hart's War; Mister Roberts; Stalag 17; and The Thin Red Line — all have voice-over in them. These, and hundreds of other war films, have voice-overs that are sometimes the thoughts of a fearful soldier, the wry observations of a participant-observer, or the declarations of all-knowing authoritative figures. There are voice-overs blasted out through a ship's PA system, voice-overs as the reading of a heart-breaking letter, or as the words of a dead comrade heard again in the mind of a haunted soldier. This thesis asks: why is voice-over such a recurring phenomenon in these films? Why is it conveyed in so many different forms? What are the terms for those different forms? What are their narrative functions? A core component of this thesis is a new taxonomy of the six distinct forms of voice-over: acousmatic, audioemic, epistolary, objective, omniscient, and subjective. However, the project is more than a structuralist taxonomy that merely serves to identify and define those forms. It is also a close examination of their narrative functions beyond the unimaginative trope that voice-over in war films is simply a convenient storytelling device. Through interdisciplinarity, combined with a realist framework, I probe the correlations between the conditions, codification, and suppression of speech within the U.S. military and the manifestations of that experience through the cinematic device and genre convention of voice-over. In addition, I present a radically new interpretation of the voice-overs in The Thin Red Line (Terrence Malick, 1998) as being both a choric meta-memorial to James Jones and a Greek tragedy, with its replication of the stagecraft of Aeschylus in its use of the cosmic frame and the inclusion of a collective character, which I have named 'The Chorus of Unknown Soldiers'. The overall result is a more logical and nuanced explanation of the forms, functions, and prevalent use of voice-over in the American World War II film.

    Myths and Legends in High-Performance Computing

    In this thought-provoking article, we discuss certain myths and legends that are folklore among members of the high-performance computing community. We gathered these myths from conversations at conferences and meetings, product advertisements, papers, and other communications such as tweets, blogs, and news articles within and beyond our community. We believe they represent the zeitgeist of the current era of massive change, driven by the end of many scaling laws such as Dennard scaling and Moore's law. While some laws end, new directions are emerging, such as algorithmic scaling or novel architecture research. Nevertheless, these myths are rarely based on scientific facts, but rather on some evidence or argumentation. In fact, we believe that this is the very reason for the existence of many myths and why they cannot be answered clearly. While it feels like there should be clear answers for each, some may remain endless philosophical debates, such as whether Beethoven was better than Mozart. We would like to see our collection of myths as a discussion of possible new directions for research and industry investment.

    On the motion planning & control of nonlinear robotic systems

    In the last decades, we have seen a soaring interest in autonomous robots, boosted not only by academia and industry but also by the ever-increasing demand from civil users. As a matter of fact, autonomous robots are fast spreading into all aspects of human life: we can see them cleaning houses, navigating through city traffic, or harvesting fruits and vegetables. Almost all commercial drones already exhibit unprecedented and sophisticated capabilities that make them suitable for these applications, such as obstacle avoidance, simultaneous localisation and mapping, path planning, visual-inertial odometry, and object tracking. The major limitations of such robotic platforms lie in the limited payload they can carry, in their cost, and in the limited autonomy imposed by finite battery capacity. For this reason, researchers have started to develop new algorithms able to run even on platforms that are resource-constrained both in computational capability and in the types of on-board sensors, focusing especially on very cheap sensors and hardware. The possibility of using a limited number of sensors has allowed UAV sizes to be scaled down considerably, while the implementation of new, more efficient algorithms, which perform the same tasks in less time, helps compensate for the limited autonomy. However, the robots developed so far are not mature enough to operate completely autonomously without human supervision, owing to their still large dimensions (especially for aerial vehicles), which make these platforms unsafe around humans, and to the non-negligible probability of numerical and decision errors. From this perspective, this thesis aims to review and improve current state-of-the-art solutions for autonomous navigation from a purely practical point of view. In particular, we focus on the problems of robot control, trajectory planning, environment exploration, and obstacle avoidance.
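
    As a point of reference for the obstacle-avoidance problem mentioned above, one classic and lightweight approach is the artificial potential field, in which the robot descends an attractive potential toward the goal while being repelled by nearby obstacles. The sketch below is only an illustration of that classic method, not the approach taken in the thesis; all names and gains are made up:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, step=0.05,
                         k_att=1.0, k_rep=0.3, influence=1.0):
    """One gradient step on an attractive/repulsive potential in the plane.

    pos, goal: 2D positions; obstacles: list of 2D obstacle centres.
    Returns the next position, pulled toward the goal and pushed away from
    obstacles closer than `influence`.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < influence:
            # repulsive term grows rapidly as the robot nears the obstacle
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    norm = np.linalg.norm(force)
    return pos if norm < 1e-9 else pos + step * force / norm
```

    Methods of this kind are cheap enough for resource-constrained platforms, though they are known to suffer from local minima, which is one reason more advanced planners are studied.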

    Advances in Modelling of Rainfall Fields

    Rainfall is the main input for all hydrological models, such as rainfall–runoff models and the forecasting of landslides triggered by precipitation, and its comprehension is clearly essential for effective water resource management as well. The need to improve the modeling of rainfall fields constitutes a key aspect both for efficiently realizing early warning systems and for carrying out analyses of future scenarios related to the occurrence and magnitude of all induced phenomena. The aim of this Special Issue was hence to provide a collection of innovative contributions on rainfall modeling, focusing on hydrological scales and the context of climate change. We believe that the latest research outcomes presented in this Special Issue can shed new light on the comprehension of the hydrological cycle and all the phenomena that are a direct consequence of rainfall. Moreover, the papers collected here can constitute a valid base of knowledge for improving specific key aspects of rainfall modeling, mainly concerning climate change and how it induces modifications in properties such as the magnitude, frequency, duration, and spatial extent of different types of rainfall fields. A further goal is to provide practitioners with useful tools for quantifying important design metrics in transient hydrological contexts (quantiles of assigned frequency, hazard functions, intensity–duration–frequency curves, etc.).
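
    As a small worked example of one such design metric, a rainfall quantile of assigned frequency can be estimated from a series of annual maxima with the Gumbel (EV1) distribution fitted by the method of moments. The sketch below is illustrative only, and the input series is synthetic:

```python
import math

def gumbel_quantile(annual_maxima, return_period_years):
    """Rainfall depth of assigned frequency via the Gumbel (EV1) distribution,
    fitted by the method of moments to a series of annual maxima.

    A return period T corresponds to non-exceedance probability 1 - 1/T.
    """
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in annual_maxima) / (n - 1))
    alpha = math.sqrt(6) * std / math.pi          # scale parameter
    mu = mean - 0.5772 * alpha                    # location (Euler-Mascheroni constant)
    p = 1.0 - 1.0 / return_period_years
    return mu - alpha * math.log(-math.log(p))    # inverse Gumbel CDF

# usage on a purely synthetic series of annual daily rainfall maxima (mm)
maxima = [62.0, 85.4, 71.2, 93.0, 55.8, 77.1, 102.5, 68.3, 88.9, 74.6]
x100 = gumbel_quantile(maxima, 100)  # estimated 100-year daily rainfall
```

    Under non-stationary (transient) hydrological conditions, the same quantile would instead be computed from distribution parameters that vary with time or with a climate covariate, which is precisely the kind of extension discussed in this Special Issue.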