
    Chatter, process damping, and chip segmentation in turning: A signal processing approach

    An increasing number of aerospace components are manufactured from titanium and nickel alloys that are difficult to machine because of their thermal and mechanical properties, which limits the metal removal rates that can be achieved in production. Under these machining conditions, however, the phenomenon of process damping can be exploited to help avoid the self-excited vibrations known as regenerative chatter. Greater widths of cut can then be taken to increase the metal removal rate and hence offset the cutting-speed restrictions imposed by the thermo-mechanical properties of the material. There is, however, little consensus on the underlying mechanisms that cause process damping. The present study investigates two process damping mechanisms that have previously been proposed in the machining literature: the tool flank/workpiece interference effect and the short regenerative effect. A signal processing procedure is employed to identify flank/workpiece interference from experimental data, while the short regenerative model is solved using a new frequency domain approach that yields additional insight into its stabilising effect. Analysis and signal processing of the experimentally obtained data reveal that neither of these models can fully explain the increases in stability observed in practice. Chip segmentation effects were, however, observed in a number of measurements, and it is suggested that segmentation could play an important role in the process-damped chatter stability of these materials.
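
    For context, the regenerative effect and process damping are commonly written together in the machining literature as a delay-differential equation in which a velocity-dependent damping force scales inversely with cutting speed. A single-degree-of-freedom sketch of this textbook form (generic coefficients, not the formulation identified in this study) is:

```latex
% Regenerative chatter with an interference-type process damping term (textbook sketch):
% m, c, k : modal mass, damping, stiffness of the dominant tool/workpiece mode
% K_c     : cutting-force coefficient,  b : width of cut,  \tau : spindle (delay) period
% C_d     : process damping coefficient, v : cutting speed; the 1/v scaling is why the
%           stabilising effect is strongest at the low speeds these alloys impose
m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t)
  = -K_c\, b\,\bigl[x(t) - x(t-\tau)\bigr] \;-\; \frac{C_d\, b}{v}\,\dot{x}(t)
```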

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of predicting future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In our Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can support effective methods for handling the dynamic nature of security attacks by extracting and analyzing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image and color image watermarking, a battlefield target aggregation behavior recognition model, IP camera security, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.
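
    As a concrete illustration of the data-driven security detection this Issue covers, the following minimal sketch trains a classifier to separate benign from attack traffic; the file name "flows.csv" and its feature columns are assumptions for illustration, not a dataset from the book.

```python
# Minimal sketch: ML-based intrusion detection on tabular network-flow features.
# "flows.csv" and its column names are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("flows.csv")                    # e.g. duration, bytes_in, bytes_out, ..., label
X, y = df.drop(columns=["label"]), df["label"]   # label: "benign" vs. "attack"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```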

    A comparative study of Russian trolls using several machine learning models on Twitter data

    Ever since Russian trolls were brought to light, their interference in the 2016 US Presidential election has been monitored and studied thoroughly. These Russian trolls registered fake accounts on several major social media sites to influence public opinion. Our work seeks to discover patterns in these tweets and classify them using different machine learning approaches such as Support Vector Machines, Word2vec, and neural network models, and then to create a benchmark for comparing the different models. Two machine learning models are developed for this purpose. The first classifies any given tweet as either a troll or a non-troll tweet. The second classifies troll tweets as coming from left trolls or right trolls, based on apparent extreme political orientation. Several kinds of statistical analysis are performed based on the tweets and their classifications. Further, an analysis of the machine learning algorithms, using several performance criteria, is presented.
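
    A minimal sketch of the first model (troll vs. non-troll classification) using one of the approaches named above, a linear SVM over TF-IDF features; the dataset path and column names are assumptions, not the authors' data.

```python
# Minimal sketch: binary troll / non-troll tweet classification with a linear SVM.
# "tweets.csv" and its columns (text, label) are illustrative assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

tweets = pd.read_csv("tweets.csv")               # columns: text, label (1 = troll, 0 = non-troll)
X_tr, X_te, y_tr, y_te = train_test_split(
    tweets["text"], tweets["label"], test_size=0.2, stratify=tweets["label"], random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```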

    Artificial intelligence driven anomaly detection for big data systems

    The main goal of this thesis is to contribute to research on automated performance anomaly detection and interference prediction by implementing Artificial Intelligence (AI) solutions for complex distributed systems, especially Big Data platforms within cloud computing environments. Late detection and manual resolution of performance anomalies and system interference in Big Data systems may lead to performance violations and financial penalties. Motivated by this issue, we propose AI-based methodologies for anomaly detection and interference prediction tailored to Big Data and containerized batch platforms, to better analyze system performance and effectively utilize computing resources within cloud environments. New, precise, and efficient performance management methods are therefore the key to handling performance anomalies and interference impacts and to improving the efficiency of data center resources.

    The first part of this thesis contributes to performance anomaly detection for in-memory Big Data platforms. We examine the performance of Big Data platforms and justify our choice of the in-memory Apache Spark platform. An artificial neural network-driven methodology is proposed to detect and classify performance anomalies for batch workloads based on RDD characteristics and operating system monitoring metrics. Our method is evaluated against other popular machine learning (ML) algorithms, as well as against four different monitoring datasets. The results show that our proposed method outperforms the other ML methods, typically achieving 98–99% F-scores. Moreover, we show that a random start instant, a random duration, and overlapped anomalies do not significantly impact the performance of the proposed methodology.

    The second contribution addresses the challenge of anomaly identification within an in-memory streaming Big Data platform by investigating agile hybrid learning techniques. We develop TRACK (neural neTwoRk Anomaly deteCtion in sparK) and TRACK-Plus, two methods to efficiently train a class of machine learning models for performance anomaly detection using a fixed number of experiments. Our model revolves around using artificial neural networks with Bayesian Optimization (BO) to find the training dataset size and configuration parameters that allow the anomaly detection model to be trained efficiently to high accuracy. The objective is to accelerate the search for the training dataset size, optimize the neural network configuration, and improve the performance of anomaly classification. A validation based on several datasets from a real Apache Spark Streaming system demonstrates that the proposed methodology can efficiently identify performance anomalies, near-optimal configuration parameters, and a near-optimal training dataset size while reducing the number of experiments by up to 75% compared with naïve anomaly detection training.

    The last contribution overcomes the challenges of predicting the completion time of containerized batch jobs and proactively avoiding performance interference by introducing an automated prediction solution that estimates interference among co-located batch jobs within the same computing environment. An AI-driven model is implemented to predict the interference among batch jobs before it occurs within the system. Our interference detection model can estimate and alleviate the task slowdown caused by interference, and it assists system operators in making accurate decisions to optimize job placement. The model is agnostic to the business logic internal to each job; instead, it is learned from system performance data by applying artificial neural networks to establish the completion-time prediction of batch jobs within cloud environments. We compare our model with three baseline models (a queueing-theoretic model, operational analysis, and an empirical method) on historical measurements of job completion time and CPU run-queue size (i.e., the number of active threads in the system). The proposed model captures multithreading, operating system scheduling, sleeping time, and job priorities. A validation comprising 4,500 experiments based on the DaCapo benchmarking suite was carried out, confirming the predictive efficiency and capabilities of the proposed model by achieving up to 10% MAPE compared with the other models.
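
    A minimal sketch of the first contribution's idea, an artificial-neural-network classifier over operating-system monitoring metrics; the feature set and the synthetic data below are assumptions for illustration, not the thesis's Spark datasets.

```python
# Minimal sketch: ANN-based performance anomaly classification on OS monitoring metrics.
# Features and data are synthetic and illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Illustrative metrics per time window: cpu_util, mem_util, disk_io, net_io, gc_time
X_normal  = rng.normal([0.4, 0.5, 0.2, 0.3, 0.05], 0.05, size=(500, 5))
X_anomaly = rng.normal([0.9, 0.8, 0.6, 0.7, 0.30], 0.10, size=(100, 5))
X = np.vstack([X_normal, X_anomaly])
y = np.array([0] * 500 + [1] * 100)          # 0 = normal window, 1 = anomalous window

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
print("F1 (5-fold):", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```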

    Application of Machine Learning and Data Analytics Methods to Detect Interference Effects from Offset Wells

    The goal of this thesis is to demonstrate that linear data-driven models are innovative and robust, with the potential to forecast well bottom-hole pressure and identify interference effects between wells. Permanent Downhole Gauges (PDGs) provide a continuous real-time record of pressure and temperature in the downhole environment, and these real-time pressure measurements contain information about the reservoir properties and interactions with offset wells. This work presents a methodology to quickly reproduce well bottom-hole pressure behavior and forecast future behavior using those measurements. It also identifies the influence of offset wells based on flowrate-pressure measurements using linear data analysis methods. In this methodology, we chose linear machine learning methods because they are fast, robust, and easily interpreted. Furthermore, we formulate the functional relationship between flowrate and bottom-hole pressure as a linear relationship using superposition techniques and physical flow-behavior assumptions. Then, without making any further physical assumptions, we divide the process into two stages: training and testing. Training is the regression phase, in which the flowrates and pressures are correlated using linear machine learning algorithms. Testing is the extrapolation, or forecasting, of the training model to predict well pressure behavior based on a flowrate history. First, to identify offset-well interference effects for a selected well, we reproduce the well's bottom-hole pressure response using only the flowrate and time data for that well. Subsequently, we test the influence of offset wells on the selected well's bottom-hole pressure response by considering the selected well and one offset well's flowrate history at a time, until we have examined all possible offset wells. By systematically studying the effects of offset wells on the selected well's bottom-hole pressure, we are able to determine the interference of offset wells using only the flowrate histories of the considered wells. We validate the methodology on a synthetic reservoir model whose behavior (connectivity) is known: we reproduce and forecast the pressure behavior of a selected well, determine the influence of offset wells, and compare the identified interference wells with the known answers, noting agreement between the algorithm's results and the synthetic model. We also test the methodology on actual field cases and observe agreement between the interference effects identified from offset wells using the linear data-analytics method and those determined from the interpretation of multi-well tests and dynamic observations.
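
    A minimal sketch of the screening idea: fit bottom-hole pressure as a linear function of superposition features built from rate histories, then check whether adding an offset well's rates improves the fit. The log-time kernel, variable names, and synthetic data are illustrative assumptions, not the thesis's formulation or field data.

```python
# Minimal sketch: linear regression of BHP on superposition-of-rates features,
# used to flag interference from an offset well. Synthetic, illustrative data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def superposition_features(t, rates, eps=1e-3):
    """Convolve step changes in rate with a log-time kernel (constant-rate drawdown form)."""
    dq = np.diff(rates, prepend=0.0)                       # rate step at each time
    lag = np.log(np.clip(t[:, None] - t[None, :], eps, None))
    return (np.tril(lag, -1) * dq[None, :]).sum(axis=1, keepdims=True)

t = np.linspace(1.0, 100.0, 200)
q_self = np.where(t < 50, 800.0, 600.0)                    # selected well's rate history
q_offset = np.where(t < 30, 0.0, 400.0)                    # candidate offset well

# Synthetic "measured" pressure influenced by both wells, plus noise
p = (5000.0
     - 2.0 * superposition_features(t, q_self)[:, 0]
     - 0.8 * superposition_features(t, q_offset)[:, 0]
     + np.random.default_rng(0).normal(0, 5, t.size))

X_self = superposition_features(t, q_self)
X_both = np.hstack([X_self, superposition_features(t, q_offset)])

for name, X in [("self only", X_self), ("self + offset", X_both)]:
    fit = LinearRegression().fit(X, p)
    print(name, "R^2 =", round(r2_score(p, fit.predict(X)), 4))
```

    A clearly better fit for the "self + offset" case would flag the offset well as interfering, mirroring the one-offset-well-at-a-time screening described above.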

    A Machine Learning Specklegram Wavemeter (MaSWave) Based On A Short Section Of Multimode Fiber As The Dispersive Element

    Wavemeters are very important for precise and accurate measurements of both pulsed and continuous-wave optical sources. Conventional wavemeters employ gratings, prisms, and other wavelength-sensitive devices in their design. Here, we report a simple and low-cost wavemeter based on a section of multimode fiber (MMF). The concept is to correlate the multimodal interference pattern (i.e., speckle patterns or specklegrams) at the end face of an MMF with the wavelength of the input light source. Through a series of experiments, specklegrams from the end face of an MMF, captured by a CCD camera (acting as a low-cost interrogation unit), were analyzed using a convolutional neural network (CNN) model. The developed machine learning specklegram wavemeter (MaSWave) can accurately map specklegrams to wavelengths with a resolution of up to 1 pm when employing a 0.1 m long MMF. Moreover, the CNN was trained with several categories of image datasets (from 10 nm to 1 pm wavelength shifts), and analysis for different step-index and graded-index MMF types was carried out. The work shows how further robustness to the effects of environmental changes (mainly vibrations and temperature changes) can be achieved, at the expense of decreased wavelength-shift resolution, by employing a shorter MMF section (e.g., 0.02 m long). In summary, this work demonstrates how a machine learning model can be used for the analysis of specklegrams in the design of a wavemeter.
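
    A minimal sketch of a CNN that maps specklegram images to discrete wavelength classes, in the spirit of the approach described above; the architecture, image size, and class count are assumptions, not the authors' exact model.

```python
# Minimal sketch: small CNN classifying 64x64 grayscale specklegrams into wavelength bins.
import torch
import torch.nn as nn

class SpecklegramCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                          # x: (batch, 1, 64, 64) specklegrams
        return self.classifier(self.features(x).flatten(1))

model = SpecklegramCNN(n_classes=10)
logits = model(torch.randn(4, 1, 64, 64))          # dummy batch of 4 specklegrams
print(logits.shape)                                # torch.Size([4, 10])
```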

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
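
    As a toy instance of the reinforcement-learning-for-wireless theme surveyed in the article, the following sketch uses stateless epsilon-greedy Q-learning for dynamic channel selection; the 4-channel environment and reward model are illustrative assumptions.

```python
# Minimal sketch: stateless (bandit-style) epsilon-greedy Q-learning for channel selection.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 4
p_idle = np.array([0.9, 0.6, 0.3, 0.1])     # hidden probability each channel is free
Q = np.zeros(n_channels)                    # single-state Q-table
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    a = rng.integers(n_channels) if rng.random() < epsilon else int(np.argmax(Q))
    reward = 1.0 if rng.random() < p_idle[a] else 0.0   # 1 = successful transmission
    Q[a] += alpha * (reward - Q[a])                     # stateless Q-learning update

print("learned Q-values:", np.round(Q, 2), "-> best channel:", int(np.argmax(Q)))
```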