
    Experimental Assessment of Wireless Performance Under an Elevated Noise Floor

    Debugging wireless communication problems in a field installation is difficult, disruptive, and costly. Roaming problems reported after the installation of wireless access points are hard to reproduce, since the radio-frequency (RF) environment at an end-user facility such as a data center, hospital, or industrial site can be uncontrolled, depending on radio traffic, the time of day, the location of potentially mobile scatterers, and so on. This disclosure describes techniques to reproduce the RF environment found in a given real-life scenario within a controlled RF environment such as a semi-anechoic chamber or a reverberation chamber. Experiments can then be conducted easily in order to determine the root cause of failures and to develop and test solutions.

    Debugging Machine Learning Pipelines

    Machine learning tasks entail the use of complex computational pipelines to reach quantitative and qualitative conclusions. If some of the activities in a pipeline produce erroneous or uninformative outputs, the pipeline may fail or produce incorrect results. Inferring the root cause of failures and unexpected behavior is challenging, usually requires substantial human reasoning, and is both time-consuming and error-prone. We propose a new approach that makes use of iteration and provenance to automatically infer the root causes and derive succinct explanations of failures. Through a detailed experimental evaluation, we assess the cost, precision, and recall of our approach compared to the state of the art. Our source code and experimental data will be made available for reproducibility and enhancement.
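    The abstract does not detail how iteration and provenance are combined, so the following is only a minimal Python sketch of the general idea, assuming a design in which each stage's output is recorded as provenance and checked, with the earliest failing stage reported as a root-cause candidate; all stage names and checks here are hypothetical.

```python
# Minimal sketch (assumed design, not the paper's implementation): record
# each stage's output as provenance and report the earliest stage whose
# output fails a validity check as the root-cause candidate.

def run_with_provenance(stages, data, checks):
    """stages: list of (name, fn); checks: map from stage name to predicate."""
    provenance = []
    for name, fn in stages:
        data = fn(data)
        provenance.append((name, data))
        check = checks.get(name)
        if check is not None and not check(data):
            return provenance, name  # earliest failing stage
    return provenance, None

# Hypothetical two-stage pipeline: a cleaning step whose overly strict
# filter empties the dataset, which would later crash the aggregation.
stages = [
    ("clean", lambda rows: [r for r in rows if r["value"] > 100]),
    ("aggregate", lambda rows: sum(r["value"] for r in rows) / len(rows)),
]
checks = {"clean": lambda rows: len(rows) > 0}

provenance, culprit = run_with_provenance(stages, [{"value": 3}, {"value": 7}], checks)
print("suspected root-cause stage:", culprit)  # -> clean
```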

    Root cause analysis of low throughput situations using boosting algorithms and the TreeShap analysis

    Detecting and diagnosing the root cause of failures in mobile networks is an increasingly demanding and time-consuming task, given the growing technological complexity of these networks. This paper focuses on predicting and diagnosing low User Downlink (DL) Average Throughput situations using supervised learning and the Tree Shapley Additive Explanations (TreeSHAP) method. To fulfill this objective, boosting classification models are used to predict a failure/non-failure binary label. The influence of each counter on the model's predictions is then quantified with the TreeSHAP method. From the implementation of this technique, it is possible to identify the main causes of low throughput based on an analysis of the counters that are most critical for fault detection. Furthermore, from the identification of these counters, it is possible to define a system for diagnosing the most probable cause of throughput degradation. The described methodology made it possible not only to identify and quantify low-throughput situations in a live network caused by misadjusted configuration parameters, radio problems, and network capacity problems, but also to outline a process for solving them.
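    As a concrete illustration of the workflow described above, the sketch below trains a boosting classifier on a synthetic failure label and ranks features with TreeSHAP via the shap library; the counters, data, and model settings are invented for illustration and are not the paper's.

```python
# Illustrative sketch of the abstract's pipeline using xgboost and shap;
# the "counters" are synthetic stand-ins, not the paper's real data.
import numpy as np
import xgboost
import shap

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(size=1000),  # e.g. a radio-quality counter (hypothetical)
    rng.normal(size=1000),  # e.g. a capacity counter (hypothetical)
    rng.normal(size=1000),  # e.g. a configuration counter (hypothetical)
])
# Synthetic binary label: 1 = low-throughput (failure), 0 = normal.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = xgboost.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# TreeSHAP attributes each prediction to individual counters, so the
# counters that drive failure predictions can be ranked.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_impact = np.abs(shap_values).mean(axis=0)
print("counter importance ranking:", np.argsort(mean_impact)[::-1])
```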

    Prognostic Launch Vehicle Probability of Failure Assessment Methodology for Conceptual Systems Predicated on Human Causal Factors

    Lessons learned from past failures of launch vehicle developments and operations were used to create a new method to predict the probability of failure of conceptual systems. Existing methods such as Probabilistic Risk Assessments and Human Risk Assessments were considered but found to be too cumbersome for this type of system-wide application to yet-to-be-flown vehicles. The basis for this methodology was historical databases of past failures, from which it was determined that various faulty human interactions, rather than deficient component reliabilities evaluated through statistical analysis, were the predominant root causes of failure. The methodology contains an expert-scoring component that can be used in either a qualitative or a quantitative mode, and it produces two products: a numerical score for the probability of failure, and guidance to program management on critical areas in need of increased focus to improve the probability of success. In order to evaluate the effectiveness of this new method, data from a concluded vehicle program (the USAF's Titan IV with the Centaur G-Prime upper stage) were used as a test case. Although the predicted and actual probabilities of failure were found to be in reasonable agreement (4.46% vs. 6.67%, respectively), the underlying sub-root-cause scoring had significant disparities attributable to major organizational changes and acquisitions. Recommendations are made for future applications of this method to ongoing launch vehicle development programs.
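    The abstract does not give the scoring formula, so the following is only a hedged sketch of what a quantitative expert-scoring aggregation of this general shape could look like; the factor names, weights, and calibration range are all invented for illustration and are not the paper's.

```python
# Hedged illustration only: weighted sub-root-cause expert scores are
# combined into an aggregate and mapped onto a failure-probability range.
# Every category, weight, and bound below is hypothetical.
scores = {"design review rigor": 0.2, "process discipline": 0.4, "org stability": 0.6}
weights = {"design review rigor": 0.5, "process discipline": 0.3, "org stability": 0.2}

weighted = sum(scores[k] * weights[k] for k in scores)
# Map the aggregate onto an assumed historically calibrated range of 1%..10%.
p_fail = 0.01 + weighted * (0.10 - 0.01)
print(f"estimated probability of failure: {p_fail:.2%}")
```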

    Analysis of solder joint failures arisen during the soldering process

    The paper gives an overview of the analysis methods applied by electronic failure analysis laboratories for the detection, localization, and in-depth analysis of solder joint failures, focusing on failures that arise during the soldering process. Besides the analysis methods, case studies and a few failure modes, together with their inspection and root causes, are also described. Optical microscopy is used for sample documentation and failure localization. X-ray microimaging can be applied to non-destructively inspect hidden joints, i.e., BGA (ball grid array), flip-chip, CSP (chip scale package) bump, and micro-wire joints; it can also be used to measure the amount of solder or voids in the joints. Inspection of PWB (printed wiring board) tracks and via metallization can also be carried out by these systems. SAM (scanning acoustic microscopy) is an effective tool for detecting and visualizing delaminations or cracks inside electronic packages or assemblies. As failures are in most cases traceable to material or compositional problems, SEM (scanning electron microscopy) together with electron microprobe analysis can be applied to find the root cause of failures. Thorough analyses of a broken solder joint, a wetting problem on cut surfaces, delamination, and insufficient through-hole solder joints are presented in the paper. These case studies not only demonstrate the failure analysis procedure but also reveal the root causes of the failures.

    Analysis of the Causes of Furniture Product Defects Using the Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA) Methods (Case Study at PT. Ebako Nusantara)

    PT. Ebako Nusantara is a company engaged in the furniture business; its products include chairs, tables, cabinets, and cribs, and its production system is make-to-order. The high defect rate in the production process, reaching 34.86% of the parts produced, is a major problem, because products that do not conform must be reworked, which further lengthens processing time. Given this problem, quality control is necessary in order to reduce the number of failed products. The effort made to control failed products uses the Failure Mode and Effect Analysis (FMEA) method and the Fault Tree Analysis (FTA) method to identify and analyze the failures that occur. FMEA is used to determine the failure modes with the highest Risk Priority Number (RPN), obtained by multiplying severity, occurrence, and detection. The failure modes with RPN values above 100 then become top-level events in the FTA diagram, and the FTA method is used to find the root causes of the failures that have occurred. At PT. Ebako Nusantara, two failure modes have RPN values above 100: bubbles, with a value of 150, and non-conforming size, with a value of 120. The causes of these failures are divided into failures caused by the operator and failures caused by the machine.
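    A minimal sketch of the FMEA-to-FTA handoff described above: RPN is the product of severity, occurrence, and detection, and modes scoring above 100 are promoted to FTA top-level events. The two RPN totals (150 and 120) are the ones the study reports; the individual 1-10 factor scores and the third mode are invented for illustration.

```python
# FMEA step: compute RPN = severity * occurrence * detection, then promote
# failure modes with RPN > 100 to top-level events for the FTA.
# The factor breakdowns below are illustrative; only the totals 150 and
# 120 come from the study.
failure_modes = [
    {"mode": "bubble", "severity": 6, "occurrence": 5, "detection": 5},
    {"mode": "non-conforming size", "severity": 6, "occurrence": 4, "detection": 5},
    {"mode": "scratch (hypothetical)", "severity": 4, "occurrence": 4, "detection": 3},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

top_events = [fm for fm in failure_modes if fm["rpn"] > 100]
for fm in sorted(top_events, key=lambda f: -f["rpn"]):
    print(f'{fm["mode"]}: RPN = {fm["rpn"]} -> FTA top-level event')
```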

    Operational strategies for offshore wind turbines to mitigate failure rate uncertainty on operational costs and revenue

    Several operational strategies for offshore wind farms have been established and explored in order to improve understanding of operational costs, with a focus on heavy lift vessel strategies. Additionally, an investigation into the uncertainty surrounding failure behaviour has been performed, identifying the robustness of the different strategies. Four operational strategies were considered: fix on fail, batch repair, annual charter, and purchase. A range of failure rates has been explored, identifying the key cost drivers and the circumstances under which an operator would choose to adopt each strategy. When failure rates are low, the fix-on-fail and batch strategies perform best and allow flexibility of operating strategy. When failure rates are high, purchase becomes optimal and is least sensitive to increasing failure rate. Late-life failure distributions based on the behaviour of mechanical and electrical components have been explored, and the increased operating costs caused by wear-out failures have been quantified. An increase in minor failures principally increases lost-revenue costs and can be mitigated by deploying increased maintenance resources. An increase in larger failures primarily increases vessel and repair costs. Adopting a purchase strategy can negate the vessel cost increase; however, significant cost increases are still observed. Maintenance actions requiring the use of heavy lift vessels, currently those on drive-train components and blades, are identified as critical targets for proactive maintenance to minimise overall maintenance costs.

    A Big Data Analyzer for Large Trace Logs

    The current generation of Internet-based services is typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors, including computing hardware, multiple layers of intricate software, networking and storage devices, and electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data, but the sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents BiDAl, a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R, and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched, so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture, so it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly available traces from Google data clusters, with the goal of building a realistic model of a complex data center.
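    BiDAl's own API is not shown in the abstract, so the sketch below is not BiDAl code; it is only a minimal Python/sqlite3 illustration of the kind of mix-and-match, SQL-over-logs analysis the tool is described as enabling, with made-up log events.

```python
# Illustrative sketch (not BiDAl's API): load trace events into a SQLite
# backend, then query error counts per node with plain SQL.
import sqlite3

# Hypothetical log events: (node, level, timestamp).
events = [
    ("node-17", "ERROR", 1000), ("node-17", "INFO", 1005),
    ("node-42", "ERROR", 1010), ("node-17", "ERROR", 1020),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log(node TEXT, level TEXT, ts INTEGER)")
conn.executemany("INSERT INTO log VALUES (?, ?, ?)", events)

# Rank nodes by error count, the sort of aggregate that could feed a
# data center model like the one built from the Google traces.
for node, errors in conn.execute(
    "SELECT node, COUNT(*) FROM log WHERE level = 'ERROR' "
    "GROUP BY node ORDER BY COUNT(*) DESC"
):
    print(node, errors)
```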