
    An enhanced evolutionary algorithm for requested coverage in wireless sensor networks

    Wireless sensor nodes with specific and new sensing capabilities and application requirements have changed the behaviour of wireless sensor networks and created new problems. Placement of the nodes in an application area is a well-known problem in the field. In addition, some applications must contend with high per-node cost while still providing a requested coverage and guaranteed connectivity. Conventional deployments and methods of modelling the behaviour of coverage and connectivity cannot satisfy these application needs or increase the network lifetime. Thus, this research designed and developed an effective node deployment evaluation parameter, produced a more efficient node deployment algorithm to reduce cost, and proposed an evolutionary algorithm to increase network lifetime while optimising deployment cost in relation to the requested coverage scheme. The research presents the Accumulative Path Reception Rate (APRR) as a new method to evaluate node connectivity in a network. APRR, a node deployment evaluation parameter, measures the quality of the routing path from a sensing node to the sink node and is used to evaluate the quality of a network deployment strategy. Simulation results showed that the behaviour of the network is close to the prediction of the APRR. In addition, a discrete imperialist competitive algorithm, an extension of the Imperialist Competitive Algorithm (ICA), was used to produce a network deployment plan that meets the requested event detection probability with a more efficient APRR, reducing deployment cost in comparison with the Multi-Objective Evolutionary Algorithm (MOEA) and Multi-Objective Deployment Algorithm (MODA). Finally, a Repulsion Force and Bottleneck Handling (RFBH) evolutionary algorithm was proposed to provide a higher APRR, increase network lifetime, and reduce deployment cost. Simulation results on the lifetime and communication quality of the resulting network deployments confirmed the performance of the RFBH algorithm.
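
    The abstract does not spell out how APRR is computed; as a rough illustration only, the sketch below assumes the rate is accumulated as the product of per-hop packet reception rates along the routing path, with a purely hypothetical distance-based link model (link_prr, d_ref and alpha are not from the thesis).

```python
def link_prr(distance, d_ref=10.0, alpha=2.0):
    """Hypothetical per-link packet reception rate: quality decays with
    distance (illustrative stand-in, not the thesis link model)."""
    return max(0.0, min(1.0, 1.0 / (1.0 + (distance / d_ref) ** alpha)))

def accumulative_prr(path_distances):
    """Accumulate reception rates over all hops of a routing path,
    assuming independent per-hop losses (product of link PRRs)."""
    prr = 1.0
    for d in path_distances:
        prr *= link_prr(d)
    return prr

# Example: a 3-hop routing path from a sensing node to the sink
print(accumulative_prr([8.0, 12.0, 5.0]))
```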

    Hybrid and dynamic static criteria models for test case prioritization of web application regression testing

    In the software testing domain, different techniques and approaches are used to support the process of regression testing in an effective way. The main approaches include test case minimization, test case selection, and test case prioritization. Test case prioritization techniques improve the performance of regression testing by arranging test cases in such a way that maximum fault detection is achieved in a shorter time. However, the main problems in web application testing are the time required to execute test cases and the number of faults detected. The aim of this study is to increase the effectiveness of test case prioritization by proposing an approach that detects faults earlier and at a shorter execution time. This research proposed an approach comprising two models: the Hybrid Static Criteria Model (HSCM) and the Dynamic Weighting Static Criteria Model (DWSCM). Each model applied three criteria: the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. These criteria are used to prioritize test cases for web application regression testing. The proposed HSCM utilized a clustering technique to group test cases. A hybridized technique was proposed to prioritize test cases by relying on test case priorities assigned from the combination of the aforementioned criteria. A dynamic weighting scheme of the criteria for prioritizing test cases was used to increase the fault detection rate. The findings revealed that the models improved the Average Percentage of Faults Detected (APFD), yielding the highest APFD of 98% with DWSCM and 87% with HSCM, thereby improving the effectiveness of the prioritization models. The findings confirmed the ability of the proposed techniques to improve web application regression testing.
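
    The APFD metric used to compare the models has a standard formulation, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position of the first test case that reveals fault i; a minimal Python sketch with a hypothetical fault-coverage map:

```python
def apfd(test_order, fault_matrix):
    """Average Percentage of Faults Detected (standard formula):
    APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n),
    where TFi is the 1-based position in `test_order` of the first test
    that detects fault i. fault_matrix[t] is the set of faults test t finds."""
    n = len(test_order)
    faults = set().union(*(fault_matrix[t] for t in test_order))
    m = len(faults)
    tf_sum = 0
    for fault in faults:
        tf_sum += next(i for i, t in enumerate(test_order, start=1)
                       if fault in fault_matrix[t])
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Hypothetical prioritized order and fault coverage per test case
coverage = {"t1": {1}, "t2": {1, 2, 3}, "t3": set(), "t4": {4}}
print(apfd(["t2", "t4", "t1", "t3"], coverage))  # 0.8125
```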

    Seismic signal segmentation procedure using time-frequency decomposition and statistical modelling

    In this paper a novel automatic seismic signal segmentation procedure is proposed. The procedure is motivated by the analysis of real seismic vibration signals acquired in an underground mine. During regular mining activities in an underground mine one can expect seismic events that appear just after a mining activity (e.g. blasting procedures or the provoked relaxation of rock), as well as unexpected events, such as natural rock bursts. It often happens that, during one signal realization, several shocks (events) appear. Apart from the two main sources of events (i.e. rock bursts and blasting), other activities in the mine might also trigger the seismic signal recording procedure (for example, a machine moving near the sensor). Obviously, the significance of each type of recorded signal is very different, as are its time-domain shape, energy, and frequency structure (i.e. the spectrum of the signal). In order to recognize these events automatically, the recorded observation should be pre-processed to isolate a single event. The problem of signal segmentation is investigated in the literature across several application domains; however, there are only a few works on seismic signal segmentation. In this paper we propose to use a time-frequency decomposition of the signal and to model each sub-signal at every frequency bin using statistical methods. Narrowband components make it much easier to search for a so-called structural breakpoint, i.e. the time instant when the properties of the signal change significantly. It is obvious that simple energy-based methods applied to the raw signal fail when one event begins before the previous one has relaxed. To find the beginning and end of a single event we propose to use measures based on empirical quantiles estimated for each sub-signal and, finally, to aggregate the 2D array into a 1D probability vector that indicates the locations where the statistical features have switched from one regime to another. The proposed procedure can be applied to improve the time-domain isolation of a single event when the duration of signal acquisition is longer than the duration of the event, or to isolate a single event from a sequence of events (recorded, for example, during blasting).
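
    A minimal sketch of the general idea, assuming a SciPy spectrogram as the time-frequency decomposition and a per-bin empirical-quantile exceedance score aggregated over frequency into a 1D indicator (the window sizes, quantile level and synthetic test signal are illustrative choices, not those of the paper):

```python
import numpy as np
from scipy.signal import spectrogram

def segmentation_indicator(x, fs, q=0.9):
    """Decompose the signal into narrowband sub-signals via a spectrogram,
    score each time frame of each frequency bin against that bin's empirical
    quantile, and aggregate the 2D score map over frequency into a 1D vector
    whose large values suggest a switch between statistical regimes."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    thresholds = np.quantile(Sxx, q, axis=1, keepdims=True)  # per-bin quantile
    exceed = Sxx > thresholds            # 2D map: frequency bin x time frame
    indicator = exceed.mean(axis=0)      # aggregate over frequency -> 1D
    return t, indicator

# Synthetic example: noise with a burst of narrowband energy in the middle
fs = 1000
x = np.random.randn(10 * fs)
x[4 * fs:5 * fs] += 5 * np.sin(2 * np.pi * 120 * np.arange(fs) / fs)
t, ind = segmentation_indicator(x, fs)
print(t[np.argmax(ind)])  # rough location of the event, in seconds
```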

    Wireless fault tolerances decision using artificial intelligence technique

    Wireless techniques utilized in industrial applications face significant challenges related to noise, collisions, and data fusion, particularly when wireless sensors are used to identify and classify faults in real time for protection. This study focuses on the design of an integrated wireless fault diagnosis system that protects the induction motor (IM) from vibration by decreasing its speed. Filtering, signal processing, and Artificial Intelligence (AI) techniques are applied to improve reliability and flexibility and to prevent vibration increases on the IM. Wireless speed and vibration sensors and a decision card were designed for the wireless application using C++ on the microcontroller, while MATLAB code was used to implement the signal processing and AI steps. The system successfully identified the misalignment fault and dropped the speed when vibration rose, preventing damage that might otherwise occur to the IM. The vibration level was reduced by having the system produce a response signal proportional to the fault value, modifying the main speed signal to lower the speed of the IM.
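
    The controller itself is not detailed in the abstract; the sketch below only illustrates the stated behaviour of producing a response proportional to the fault (vibration) level that lowers the commanded IM speed, with hypothetical threshold and gain values:

```python
def speed_command(vibration_rms, base_speed, vib_threshold=2.0, gain=0.15):
    """Illustrative sketch (not the paper's controller): when measured
    vibration exceeds a threshold, reduce the commanded motor speed in
    proportion to the excess, to protect the induction motor."""
    excess = max(0.0, vibration_rms - vib_threshold)
    reduction = min(0.8, gain * excess)   # cap the reduction at 80% of base speed
    return base_speed * (1.0 - reduction)

# Example readings (mm/s RMS) from a wireless vibration sensor
for vib in [1.2, 2.5, 4.0, 7.5]:
    print(vib, "->", round(speed_command(vib, base_speed=1450.0), 1), "rpm")
```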

    Toward Fault Adaptive Power Systems in Electric Ships

    Shipboard Power Systems (SPS) play a significant role in next-generation Navy fleets. With the increasing power demand from propulsion loads, ship service loads, weaponry systems and mission systems, a stable and reliable SPS is critical to support different aspects of ship operation. It also becomes the technology enabler to improve ship economy, efficiency, reliability, and survivability. Moreover, it is important to improve the reliability and robustness of the SPS under different operating conditions to ensure safe and satisfactory operation of the system. This dissertation aims to introduce novel and effective approaches to respond to different types of possible faults in the SPS. According to their type and duration, the possible faults in the Medium Voltage DC (MVDC) SPS have been divided into two main categories: transient and permanent faults. First, in order to manage permanent faults in the MVDC SPS, a novel real-time reconfiguration strategy has been proposed. Onboard post-fault reconfiguration aims to ensure maximum power/service delivery to the system loads following a fault. This study implements an intelligent real-time reconfiguration algorithm through an optimization technique running inside the Real-Time Digital Simulator (RTDS) platform. The simulation results demonstrate the effectiveness of the proposed real-time approach to reconfigure the system under different fault situations. Second, a novel approach to mitigate the effect of unsymmetrical transient AC faults in the MVDC SPS has been proposed. In this dissertation, the application of a combined Static Synchronous Compensator (STATCOM) and Superconducting Fault Current Limiter (SFCL) to improve the stability of the MVDC SPS during transient faults has been investigated. A Fluid Genetic Algorithm (FGA) is introduced to design the STATCOM's controller. Moreover, a multi-objective optimization problem has been formulated to find the optimal size of the SFCL's impedance. In the proposed scheme, the STATCOM can assist the SFCL to keep the vital load terminal voltage close to the normal state in an economic sense. The proposed technique provides acceptable post-disturbance and post-fault performance, recovering the system to its normal state better than the alternatives.
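
    Neither the Fluid Genetic Algorithm nor the simulation models are described in the abstract; purely as an illustration of the optimization step, the sketch below substitutes a generic real-coded genetic algorithm and a placeholder cost function for a joint search over a controller gain and the SFCL impedance:

```python
import random

def cost(x):
    """Placeholder objective standing in for the simulation-based metrics
    (voltage deviation, SFCL impedance cost); the optimum here is arbitrary."""
    kp, z_sfcl = x
    return (kp - 1.2) ** 2 + 0.5 * (z_sfcl - 3.0) ** 2

def genetic_search(bounds, pop_size=30, generations=60, mut=0.1):
    """Minimal real-coded genetic algorithm: tournament selection, arithmetic
    crossover, Gaussian mutation, elitist survival. A generic stand-in, not
    the Fluid Genetic Algorithm named in the abstract."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1 = min(random.sample(pop, 2), key=cost)   # tournament of size 2
            p2 = min(random.sample(pop, 2), key=cost)
            w = random.random()
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            child = [min(hi, max(lo, g + random.gauss(0, mut)))
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = sorted(pop + children, key=cost)[:pop_size]  # keep the best
    return pop[0]

best = genetic_search([(0.0, 5.0), (0.0, 10.0)])
print("best [Kp, Z_sfcl]:", best, "cost:", cost(best))
```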

    A Bayesian Approach to Control Loop Performance Diagnosis Incorporating Background Knowledge of Response Information

    To isolate the problem source degrading control loop performance, this work focuses on how to incorporate background knowledge into Bayesian inference. In an effort to reduce dependence on the amount of historical data available, we consider a general kind of background knowledge that appears in many applications. The knowledge, known as response information, is about which faults can possibly affect each of the monitors. We show how this knowledge can be translated to constraints on the underlying probability distributions and introduced into the Bayesian diagnosis. In this way, the dimensionality of the observation space is reduced and thus the diagnosis can be more reliable. Furthermore, for the judgments to be consistent, the set of posterior probabilities of each possible abnormality computed from different observation subspaces is synthesized to obtain partially ordered posteriors. The eigenvalue formulation is used on the pairwise comparison matrix. The proposed approach is applied to a diagnosis problem on an oil sand solids handling system, where it is shown how the combination of background knowledge and data enhances the control performance diagnosis even when the abnormality data are sparse in the historical database.
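
    The synthesis via "the eigenvalue formulation on the pairwise comparison matrix" suggests a principal-eigenvector ranking in the style of AHP; a minimal sketch with a hypothetical comparison matrix for three candidate faults (the matrix entries are made up, not from the paper):

```python
import numpy as np

def principal_eigenvector(C, iters=100):
    """Power iteration on a positive pairwise comparison matrix C; the
    normalized principal eigenvector gives the synthesized ranking."""
    w = np.ones(C.shape[0]) / C.shape[0]
    for _ in range(iters):
        w = C @ w
        w /= w.sum()
    return w

# Hypothetical pairwise comparisons of three candidate abnormalities,
# e.g. aggregated from posteriors computed on different observation subspaces.
C = np.array([[1.0, 3.0, 0.5],
              [1/3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])
print(principal_eigenvector(C))  # relative plausibility of each abnormality
```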

    Performance Evaluation of Structured and Unstructured Data in PIG/HADOOP and MONGO-DB Environments

    The exponential growth of data initially presented difficulties for prominent organizations such as Google, Yahoo, Amazon, Microsoft, Facebook, and Twitter. The volume of information that needs to be handled by cloud applications is growing significantly faster than storage capacity. This growth requires new systems for managing and analyzing data. The term Big Data is used to describe large volumes of unstructured (or semi-structured) and structured data created by different applications, messages, weblogs, and social networking. Big Data is data whose size, variety and uncertainty require new supplementary models, procedures, algorithms, and research to manage it and extract value and hidden knowledge from it. To process more information efficiently, parallelism is used for analysis. To deal with unstructured and semi-structured information, NoSQL databases have been introduced. Hadoop serves Big Data analysis requirements well: it is designed to scale from a single server to a large cluster of machines with a high level of fault tolerance. Many business and research institutes such as Facebook, Yahoo, and Google have had a growing need to import, store, and analyze dynamic semi-structured data and its metadata. Moreover, the significant growth of semi-structured data inside large web-based organizations has prompted the creation of NoSQL data stores for flexible storage and MapReduce for scalable parallel analysis. They assessed, used, and adapted Hadoop, the most popular open-source implementation of MapReduce, to address the needs of various analytics problems. These institutes are also using MongoDB, a document-oriented NoSQL store. However, there is limited understanding of the performance trade-offs of using these two technologies. This paper assesses the performance, scalability, and fault tolerance of MongoDB and Hadoop, with the objective of identifying the right programming environment for scientific data analytics and research. Recently, an increasing number of organizations have developed various kinds of non-relational databases (such as MongoDB, Cassandra, Hypertable, HBase/Hadoop, and CouchDB), generally referred to as NoSQL databases. The enormous amount of information generated requires an effective system to analyze the data in various scenarios and under various constraints. In this paper, the objective is to find the break-even point of Hadoop/Pig and MongoDB and to develop a robust environment for data analytics.
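
    One side of such a benchmark might look like the sketch below, which assumes a local MongoDB instance reachable through pymongo and uses hypothetical database, collection and field names; the Pig/Hadoop counterpart (an equivalent GROUP BY script on the cluster) is not shown.

```python
import time
from pymongo import MongoClient

def time_mongo_group_by(uri="mongodb://localhost:27017", n_docs=100_000):
    """Rough benchmark sketch: time a bulk load and a group-by aggregation
    in MongoDB. Database/collection names and the workload are illustrative."""
    coll = MongoClient(uri)["benchmark"]["events"]
    coll.drop()

    docs = [{"user": f"u{i % 1000}", "value": i} for i in range(n_docs)]
    t0 = time.perf_counter()
    coll.insert_many(docs)
    load_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    result = list(coll.aggregate([
        {"$group": {"_id": "$user", "count": {"$sum": 1}}}
    ]))
    query_s = time.perf_counter() - t0
    return load_s, query_s, len(result)

if __name__ == "__main__":
    print(time_mongo_group_by())
```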

    Recent Advances in Social Data and Artificial Intelligence 2019

    The importance and usefulness of subjects and topics involving social data and artificial intelligence are becoming widely recognized. This book contains invited review, expository, and original research articles dealing with, and presenting state-of-the-art accounts of, the recent advances in the subjects of social data and artificial intelligence, and potentially their links to Cyberspace.