9,664 research outputs found

    Temperature-Driven Anomaly Detection Methods for Structural Health Monitoring

    Get PDF
    Reported in this thesis is a data-driven anomaly detection method for structural health monitoring based on the utilization of temperature-induced variations. Structural anomaly detection should identify meaningful changes in measurements that are due to abnormal structural behaviour. Because temperature-induced variations and structural abnormalities can produce significant misinterpretations, developing solutions that identify structural anomalies from measurements while accounting for temperature influence is a critical step in supporting structural maintenance. A temperature-driven anomaly detection method is proposed that introduces the idea of blind source separation for extracting the thermal response and for subsequent anomaly detection. Two thermal feature extraction methods are employed, classified as underdetermined and overdetermined. The underdetermined method has three phases: (a) mode decomposition using Empirical Mode Decomposition or Ensemble Empirical Mode Decomposition; (b) data reduction using Principal Component Analysis (PCA); (c) blind separation using Independent Component Analysis (ICA). The overdetermined method has two stages: pre-indication using PCA and blind separation using ICA. Based on the extracted thermal response, the temperature-driven anomaly detection method is then developed in combination with four methodologies: Moving Principal Component Analysis (MPCA); Robust Regression Analysis (RRA); One-Class Support Vector Machine (OCSVM); and Artificial Neural Network (ANN). The resulting temperature-driven anomaly detection methods are designated Td-MPCA, Td-RRA, Td-OCSVM, and Td-ANN. The proposed thermal feature extraction methods and temperature-driven anomaly detection methods have been investigated in three case studies. The first is a numerical truss bridge with simulated material stiffness reduction creating different levels of damage. The second is a purpose-built truss bridge in the Structures Lab at the University of Warwick. The third is the Ricciolo curved viaduct in Switzerland. Two primary findings are confirmed by the evaluation results of these three case studies. Firstly, temperature-induced variations can conceal damage information in measurements. Secondly, the temperature-driven methods (Td-MPCA, Td-RRA, Td-OCSVM, and Td-ANN) disclose slight anomalies earlier and more efficiently than the corresponding current anomaly detection methods (MPCA, RRA, OCSVM, and ANN). The unique features of the author’s proposed temperature-driven anomaly detection method can be highlighted as follows: (a) it is a data-driven method for extracting features from an unknown structural system; in other words, prior knowledge of the structural in-service conditions and physical models is not necessary; (b) it is the first time that blind source separation approaches and related algorithms have been successfully employed for extracting temperature-induced responses; (c) it is a new approach to reliably assess the capability of using temperature-induced responses for anomaly detection.
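
    The underdetermined pipeline described above lends itself to a compact illustration. Below is a minimal sketch of an EMD, then PCA, then ICA chain for pulling a slow thermal component out of multi-sensor measurements; the library choices (PyEMD, scikit-learn), the decision to keep the slowest intrinsic mode functions, and all parameter values are illustrative assumptions rather than the thesis' actual configuration.

        import numpy as np
        from PyEMD import EMD                          # pip install EMD-signal
        from sklearn.decomposition import PCA, FastICA

        def extract_thermal_response(measurements, n_components=2):
            """measurements: array of shape (n_sensors, n_samples)."""
            # (a) Mode decomposition: keep the slowest IMFs of each channel,
            #     assumed to carry the temperature-induced trend.
            emd = EMD()
            slow_parts = []
            for channel in measurements:
                imfs = emd(channel)                    # shape (n_imfs, n_samples)
                slow_parts.append(imfs[-2:].sum(axis=0))
            slow_parts = np.asarray(slow_parts)

            # (b) Data reduction: project the slow components onto a few
            #     principal components.
            reduced = PCA(n_components=n_components).fit_transform(slow_parts.T)

            # (c) Blind separation: unmix into statistically independent sources;
            #     one of them is taken as the thermal response.
            sources = FastICA(n_components=n_components, random_state=0).fit_transform(reduced)
            return sources.T                           # (n_components, n_samples)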

    High speed research system study. Advanced flight deck configuration effects

    Get PDF
    In mid-1991, NASA contracted with industry to study high-speed civil transport (HSCT) flight deck challenges and assess the benefits, prior to initiating its High Speed Research Program (HSRP) Phase 2 efforts, then scheduled for FY-93. The results of this nine-month effort are presented, and a number of the most significant findings for the specified advanced concepts are highlighted: (1) a no-nose-droop configuration; (2) a far-forward cockpit location; and (3) advanced crew monitoring and control of complex systems. The results indicate that the no-nose-droop configuration is critically dependent upon the design and development of a safe, reliable, and certifiable Synthetic Vision System (SVS). The droop-nose configuration would incur significant weight, performance, and cost penalties. The far-forward cockpit location with conventional side-by-side seating provides little economic advantage; however, a configuration with a tandem seating arrangement allows either a substantial increase in payload (i.e., passengers) or a potential downsizing of the vehicle, with resulting increases in performance efficiency and associated reductions in emissions. Without a droop nose, forward external visibility is negated and takeoff/landing guidance and control must rely on the SVS. The technologies enabling such capabilities, which de facto provide for Category 3 all-weather operations on every flight independent of weather, represent a dramatic benefits multiplier in a 2005 global ATM network, both in terms of enhanced economic viability and environmental acceptability.

    Accelerating Audio Data Analysis with In-Network Computing

    Get PDF
    Digital transformation will bring massive connectivity and massive data handling, implying a growing demand for computing in communication networks due to network softwarization. Moreover, digital transformation will host very sensitive verticals requiring high end-to-end reliability and low latency. Accordingly, the concept of “in-network computing” has emerged: integrating computing with network communication and performing computations on the transport path of the network. This can be used to deliver actionable information directly to end users instead of raw data. However, this paradigm shift to in-network computing raises disruptive challenges for current communication networks. In-network computing (i) expects the network to host general-purpose softwarized network functions and (ii) encourages the packet payload to be modified. Yet today’s networks are designed to focus on packet forwarding functions, and under the current end-to-end transport mechanisms packet payloads should not be touched on the forwarding path. This dissertation presents full-stack in-network computing solutions, jointly designed from the network and computing perspectives, to accelerate data analysis applications, specifically acoustic data analysis. In the computing domain, two design paradigms of computational logic, progressive computing and traffic filtering, are proposed for data reconstruction and feature extraction tasks. Two widely used practical use cases, Blind Source Separation (BSS) and anomaly detection, are selected to demonstrate the design of computing modules for data reconstruction and feature extraction, respectively, in the in-network computing scheme. Following these two design paradigms, the dissertation designs two computing modules: progressive ICA (pICA) for BSS and You only hear once (Yoho) for anomaly detection. These lightweight computing modules can cooperatively perform computational tasks along the forwarding path. In this way, computational virtual functions can be introduced into the network, addressing the first challenge mentioned above, namely that the network should be able to host general-purpose softwarized network functions. Quantitative simulations show that the computing time of pICA and Yoho in in-network computing scenarios is significantly reduced, since pICA and Yoho are performed simultaneously with data forwarding. At the same time, pICA guarantees the same computing accuracy, and Yoho’s computing accuracy is improved. Furthermore, the dissertation proposes a stateful transport module in the network domain to support in-network computing under the end-to-end transport architecture. The stateful transport module extends the IP packet header so that network packets carry message-related metadata (message-based packaging). Additionally, the forwarding layer of the network device is optimized to process the packet payload based on the computational state (state-based transport component). The second challenge posed by in-network computing is thereby tackled by supporting the modification of packet payloads. The two computing modules mentioned above and the stateful transport module form the designed in-network computing solutions.
    By merging pICA and Yoho with the stateful transport module, two emulation systems, in-network pICA and in-network Yoho, have been implemented in the Communication Networks Emulator (ComNetsEmu). Quantitative emulations show that in-network pICA accelerates the overall service time of BSS by up to 32.18%, while in-network Yoho accelerates the overall service time of anomaly detection by up to 30.51%. These are promising results for the design and practical realization of future communication networks.
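
    To make the idea of message-based packaging concrete, here is a minimal sketch of a metadata header carried in front of the payload so that intermediate nodes can resume a partial computation; the field names, layout, and the notion of a processing-stage counter are assumptions for illustration, not the header actually defined in the dissertation.

        import struct

        # message id, chunk index, total chunks, processing stage (network byte order)
        METADATA_FMT = "!IHHB"

        def pack_chunk(message_id, chunk_index, total_chunks, stage, payload: bytes) -> bytes:
            header = struct.pack(METADATA_FMT, message_id, chunk_index, total_chunks, stage)
            return header + payload

        def unpack_chunk(packet: bytes):
            size = struct.calcsize(METADATA_FMT)
            fields = struct.unpack(METADATA_FMT, packet[:size])
            return fields, packet[size:]

        # A forwarding node could read the stage field, apply the next computing
        # step (e.g., one pICA iteration) to the payload, increment the stage,
        # and forward the repacked chunk towards the destination.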

    Listening for Sirens: Locating and Classifying Acoustic Alarms in City Scenes

    Get PDF
    This paper is about alerting acoustic event detection and sound source localisation in an urban scenario. Specifically, we are interested in spotting the presence of horns and sirens of emergency vehicles. In order to obtain a reliable system able to operate robustly despite the presence of traffic noise, which can be copious, unstructured, and unpredictable, we propose to treat the spectrograms of incoming stereo signals as images and apply semantic segmentation, based on a U-Net architecture, to extract the target sound from the background noise. In a multi-task learning scheme, together with signal denoising, we perform acoustic event classification to identify the nature of the alerting sound. Lastly, we use the denoised signals to localise the acoustic source on the horizon plane, regressing the direction of arrival of the sound through a CNN architecture. Our experimental evaluation shows an average classification rate of 94%, and a median absolute error on the localisation of 7.5° when operating on audio frames of 0.5 s, and of 2.5° when operating on frames of 2.5 s. The system offers excellent performance in particularly challenging scenarios, where the noise level is remarkably high.
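
    As a rough illustration of the front end described above, the sketch below turns a stereo frame into two log-magnitude spectrograms stacked as a two-channel image, ready for an image-style segmentation network such as a U-Net; librosa and all parameter values are assumptions, and the paper's exact preprocessing may differ.

        import numpy as np
        import librosa

        def stereo_spectrograms(path, frame_seconds=0.5, sr=22050, n_fft=1024, hop=256):
            audio, _ = librosa.load(path, sr=sr, mono=False)   # (2, n_samples) for stereo
            frame_len = int(frame_seconds * sr)
            specs = []
            for channel in audio:
                stft = librosa.stft(channel[:frame_len], n_fft=n_fft, hop_length=hop)
                specs.append(librosa.amplitude_to_db(np.abs(stft)))
            # Stack the two channels as a 2-channel "image" for the segmentation net.
            return np.stack(specs)                             # (2, freq_bins, time_frames)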

    Output-Only Damage Detection of Steel Beam Using Moving Average Filter

    Get PDF

    Interoperability and Quality Assurance for Multi-Vendor LTE Network

    Full text link
    The deployment of LTE is picking up pace in many countries, and these networks are deployed alongside existing 2G/3G services. LTE/LTE-A networks offer higher data rates and reduced delay to subscribers. Today's mobile networks consist of equipment from multiple vendors and are called multi-vendor networks. Interoperability testing is important at initial network launch and during network expansion. This paper discusses a typical problem related to interoperability testing, along with the test results and the issues faced during the testing. The test results discussed in the paper are obtained from three scenarios: before testing, during testing, and after testing. The test results are used to study the impact on network performance. Apart from the interoperability testing, an outline of testing that focuses on general network stability, the interworking capability of LTE with other technologies such as 2G and 3G, and a taxonomy for the generation of key performance indicators (KPIs) are also discussed.
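
    As a small illustration of how a KPI taxonomy is grounded in raw counters, the sketch below derives a generic success-rate KPI (for example an RRC setup or handover success rate); the counter names and values are invented for illustration and are not taken from the paper.

        def kpi_success_rate(successes: int, attempts: int) -> float:
            """Generic success-rate KPI in percent, e.g. RRC setup success rate."""
            return 100.0 * successes / attempts if attempts else 0.0

        # Toy counters, purely illustrative.
        counters = {"rrc_setup_attempts": 12000, "rrc_setup_successes": 11820}
        print("RRC setup success rate: %.2f%%"
              % kpi_success_rate(counters["rrc_setup_successes"],
                                 counters["rrc_setup_attempts"]))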

    Role based behavior analysis

    Get PDF
    Master's thesis in Information Security (Segurança Informática), Universidade de Lisboa, Faculdade de Ciências, 2009.
    In our days, the success of a corporation hinges on its agility and ability to adapt to fast-changing conditions. Proactive workers and an agile IT/IS infrastructure that can support them are requirements for this success. Unfortunately, this is not always the case. The users' network requirements may not be fully understood, which slows down relocation and reorganization. Also, if there is no grasp of the real requirements, the IT/IS infrastructure may not be used efficiently, with waste in some areas and deficiencies in others. Finally, enabling proactivity does not mean full unrestricted access, since this may leave the systems vulnerable to outsider and insider threats. The purpose of the work described in this thesis is to develop a system that can characterize user network behavior. We propose a modular system architecture to extract information from tagged network flows. The process begins by creating user profiles from their network flows' information. Then, similar profiles are automatically grouped into clusters, creating role profiles. Finally, the individual profiles are compared against the roles, and the ones that differ significantly are flagged as anomalies for further inspection. Considering this architecture, we propose a model to describe user and role network behavior. We also propose visualization methods to quickly inspect all the information contained in the model. The system and model were evaluated using a real dataset from a large telecommunications operator. The results confirm that the roles accurately map similar behavior. The anomaly results were also as expected, considering the underlying population. With the knowledge that the system can extract from the raw data, users' network needs can be better fulfilled and anomalous users flagged for inspection, giving an edge in agility to any company that uses it.
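
    A minimal sketch of the pipeline described above, assuming per-user feature vectors have already been aggregated from flow records: profiles are clustered into role profiles with k-means, and users far from their role centroid are flagged; the feature aggregation, the choice of k-means, and the threshold are illustrative assumptions rather than the thesis' exact model.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        def role_based_anomalies(user_features, n_roles=5, threshold=3.0):
            """user_features: array (n_users, n_features), e.g. bytes, flows, ports per user."""
            X = StandardScaler().fit_transform(user_features)
            km = KMeans(n_clusters=n_roles, n_init=10, random_state=0).fit(X)
            # Distance of each user profile to its role (cluster) centroid.
            dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
            # Flag users that deviate strongly from their role profile.
            cutoff = dists.mean() + threshold * dists.std()
            return np.where(dists > cutoff)[0], km.labels_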

    Resilience Strategies for Network Challenge Detection, Identification and Remediation

    Get PDF
    The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated, it becomes more vulnerable to attack. There is a pressing need for the future internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU Resumenet Project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions of survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge situation by identifying its occurrence and impact in real time, then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible, to mitigate the damage, and to maintain an acceptable level of service. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. Our proposed multi-stage, policy-based challenge detection system identifies both known and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. The multi-stage approach reduces computational complexity compared with the traditional single-stage approach, in which one managed object is responsible for all the functions. The approach proposed in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
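
    In the spirit of the multi-stage detection described above, here is a hedged sketch in which a cheap first stage screens traffic summaries and only escalates suspicious ones to a heavier second stage for root-cause identification; the stage logic, thresholds, and field names are illustrative assumptions, not the thesis' actual policies.

        from typing import Callable, Iterable, List, Tuple

        def multi_stage_detect(samples: Iterable[dict],
                               stage1: Callable[[dict], bool],
                               stage2: Callable[[dict], str]) -> List[Tuple[dict, str]]:
            """Return (sample, diagnosis) pairs for samples escalated past stage 1."""
            results = []
            for sample in samples:
                if stage1(sample):                              # lightweight anomaly screen
                    results.append((sample, stage2(sample)))    # deeper root-cause check
            return results

        # Toy usage with two flow summaries and hand-written stage policies.
        flows = [{"pps": 120, "unique_dsts": 3}, {"pps": 90000, "unique_dsts": 2500}]
        print(multi_stage_detect(
            flows,
            stage1=lambda s: s["pps"] > 10000,
            stage2=lambda s: "possible worm scan" if s["unique_dsts"] > 1000 else "overload"))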

    A service-oriented architecture for robust e-voting

    Get PDF