90 research outputs found

    Unsupervised Intrusion Detection with Cross-Domain Artificial Intelligence Methods

    Cybercrime is a major concern for corporations, business owners, governments and citizens, and it continues to grow in spite of increasing investments in security and fraud prevention. The main challenges in this research field are detecting unknown attacks and reducing the false-positive rate. The aim of this research work was to address both problems by leveraging four artificial intelligence techniques. The first technique is a novel unsupervised learning method based on skip-gram modeling. It was designed, developed and tested against a public dataset containing popular intrusion patterns. High accuracy and a low false-positive rate were achieved without prior knowledge of attack patterns. The second technique is a novel unsupervised learning method based on topic modeling. It was applied to three related domains (network attacks, payment fraud, IoT malware traffic). High accuracy was achieved in all three scenarios, even though the malicious activity differs significantly from one domain to the other. The third technique is a novel unsupervised learning method based on deep autoencoders, with feature selection performed by a supervised method, random forest. The results showed that this technique can outperform other similar techniques. The fourth technique is based on an MLP neural network and is applied to alert reduction in fraud prevention. This method automates manual reviews previously done by human experts, without significantly impacting accuracy.
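    A minimal, hypothetical sketch of the third technique as described above (supervised random-forest feature selection followed by an unsupervised autoencoder-style detector); the synthetic dataset, layer sizes and 99th-percentile threshold are illustrative assumptions, not the thesis implementation:

    # Sketch only: random-forest feature selection + autoencoder-style anomaly detection.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for labelled traffic features (labels are used only for feature selection).
    X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)

    # 1) Supervised step: rank features with a random forest and keep the strongest ones.
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[-10:]

    # 2) Unsupervised step: train an autoencoder-style network to reconstruct benign samples.
    benign = StandardScaler().fit_transform(X[y == 0][:, top])
    ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=2000, random_state=0)
    ae.fit(benign, benign)

    # 3) Flag samples whose reconstruction error exceeds a percentile threshold.
    def anomaly_scores(model, Z):
        return np.mean((model.predict(Z) - Z) ** 2, axis=1)

    threshold = np.percentile(anomaly_scores(ae, benign), 99)  # tune to trade detection vs. false positives
    print("alerts on benign training data:", int(np.sum(anomaly_scores(ae, benign) > threshold)))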

    What broke where for distributed and parallel applications — a whodunit story

    Detection, diagnosis and mitigation of performance problems in today's large-scale distributed and parallel systems are difficult tasks. These systems are composed of various complex software and hardware components. When the system experiences a performance or correctness problem, developers struggle to understand its root cause and to fix it in a timely manner. In my thesis, I address these three aspects of performance problems in computer systems. First, we focus on diagnosing performance problems in large-scale parallel applications running on supercomputers. We developed techniques to localize the performance problem for root-cause analysis. Parallel applications, most of which are complex scientific simulations running on supercomputers, can create up to millions of parallel tasks that run on different machines and communicate using the message-passing paradigm. We developed a highly scalable and accurate automated debugging tool called PRODOMETER, which first creates a logical progress-dependency graph of the tasks to highlight how the problem spread through the system and manifested as a system-wide performance issue; second, uses this graph to identify the task where the problem originated; and finally pinpoints the code region corresponding to the origin of the bug. Second, we developed a tool-chain that detects performance anomalies using machine-learning techniques and achieves a very low false-positive rate. Our input-aware performance anomaly detection system consists of a scalable data-collection framework that gathers performance-related metrics from code regions at different granularities, an offline model-creation and prediction-error characterization technique, and a threshold-based anomaly-detection engine for production runs. The system requires few training runs and can handle unknown inputs and parameter combinations by dynamically calibrating the anomaly-detection threshold according to the characteristics of the input data and of the models' prediction error. Third, we developed a performance-problem mitigation scheme for erasure-coded distributed storage systems. Repairing failed blocks in an erasure-coded distributed storage system takes a long time in network-constrained data centers, because the repair operation gathers a large amount of data from multiple nodes onto a single node and then performs a mathematical operation to reconstruct the missing part. This process severely congests the links toward the destination where the newly recreated data is to be hosted. We proposed a novel distributed repair technique, called Partial-Parallel-Repair (PPR), that performs this reconstruction in parallel on multiple nodes and eliminates network bottlenecks, as a result greatly speeding up the repair process. Fourth, we study how, for a class of applications, performance can be improved (or performance problems can be mitigated) by selectively approximating some of the computations. For many applications, the main computation happens inside a loop that can be logically divided into a few temporal segments, which we call phases. We found that while approximating the initial phases might severely degrade the quality of the results, approximating the computation in the later phases has very little impact on the final quality of the result. Based on this observation, we developed an optimization framework that, for a given quality-loss budget, finds the best approximation settings for each phase of the execution.
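    A small, hedged sketch of the input-aware anomaly-detection idea described above (not the thesis tool-chain): a model is fitted on a few training runs, its prediction error is characterised, and the detection threshold for production runs is derived from that error; the metric, input sizes and the factor k are invented for illustration.

    # Sketch only: calibrate an anomaly threshold from the prediction error of a model
    # fitted on a few training runs, then check production runs against it.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    train_inputs = np.array([[1e5], [2e5], [4e5], [8e5]])   # hypothetical input sizes
    train_times = np.array([0.9, 1.8, 3.7, 7.5])            # measured runtimes of one code region (s)

    model = LinearRegression().fit(train_inputs, train_times)
    sigma = (train_times - model.predict(train_inputs)).std()  # characterises the prediction error

    def is_anomalous(input_size, observed_time, k=4.0):
        """Flag a run whose runtime deviates more than k*sigma from the model's prediction."""
        expected = model.predict([[input_size]])[0]
        return abs(observed_time - expected) > k * sigma

    print(is_anomalous(6e5, 5.6))   # close to the fitted trend -> False
    print(is_anomalous(6e5, 9.0))   # far above expectation -> True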

    The Next Generation Space Telescope

    In Space Science in the Twenty-First Century, the Space Science Board of the National Research Council identified high-resolution interferometry and high-throughput instruments as the imperative new initiatives for NASA in astronomy for the two decades spanning 1995 to 2015. In the optical range, the study recommended an 8- to 16-meter space telescope, destined to be the successor of the Hubble Space Telescope (HST) and to complement the ground-based 8- to 10-meter-class telescopes presently under construction. It might seem too early to start planning for a successor to HST. In fact, we are late. The lead time for such major missions is typically 25 years, and HST has been in the making even longer, with its inception dating back to the early 1960s. The maturity of space technology and a more substantial technological base may lead to a shorter time scale for the development of the Next Generation Space Telescope (NGST). Optimistically, one could therefore anticipate that NGST could be flown as early as 2010. On the other hand, the planned lifetime of HST is 15 years, so even under the best circumstances there will be a five-year gap between the end of HST and the start of NGST. The purpose of this first workshop dedicated to NGST was to survey its scientific potential and technical challenges. The three-day meeting brought together 130 astronomers and engineers from government, industry and universities. Participants explored the technologies needed for building and operating the observatory, reviewed the current status and future prospects for astronomical instrumentation, and discussed the launch and space-support capabilities likely to be available in the next decade. To focus discussion, the invited speakers were asked to base their presentations on two nominal concepts: a 10-meter telescope in high earth orbit and a 16-meter telescope on the moon. The workshop closed with a panel discussion focused mainly on the scientific case, siting, and the programmatic approach needed to bring NGST into being. The essential points of this panel discussion have been incorporated into a series of recommendations that represent the conclusions of the workshop. Speakers were asked to provide manuscripts of their presentations. Those received are reproduced here with only minor editorial changes. The few missing papers have been replaced by the presentation viewgraphs. The discussion that follows each speaker's paper was derived from the question-and-answer sheets or, if unavailable, from the tapes of the meeting. In the latter case, the editors have made every effort to represent the discussion faithfully.

    Applications Technology Satellite ATS-6 experiment checkout and continuing spacecraft evaluation report

    The activities of the ATS-6 spacecraft are reviewed. The following subsystems and experiments are summarized: (1) radio beacon experiments; (2) spacecraft attitude precision pointing and slewing adaptive control experiment; (3) satellite instructional television experiment; (4) thermal control subsystem; (5) spacecraft propulsion subsystem; (6) telemetry and control subsystem; (7) millimeter wave experiment; and (8) communications subsystem. The results of the performance evaluation of these subsystems and experiments are presented.

    User mobility prediction and management using machine learning

    Next-generation mobile networks (NGMNs) are envisioned to overcome current user-mobility limitations while improving network performance. Some of the limitations facing mobility management in future mobile networks are addressing massive traffic-growth bottlenecks, providing better quality and experience to end users, supporting ultra-high data rates, ensuring ultra-low latency, and providing seamless handovers (HOs) from one base station (BS) to another. Thus, for future networks to manage users' mobility under all of these stringent constraints, artificial intelligence (AI) is deemed to play a key role in automating the end-to-end process through machine learning (ML). The objective of this thesis is to explore user mobility prediction and management use-cases using ML. First, a background and literature review is presented, covering an overview of current mobile networks and ML-driven applications for user mobility and its management. Then, use-cases of mobility prediction in dense mobile networks are analysed and optimised with ML algorithms. An overall framework test accuracy of 91.17% was obtained with an artificial neural network (ANN), outperforming the other mobility prediction algorithms considered. Furthermore, a concept of mobility-prediction-based energy consumption is discussed to automate and classify users' mobility and reduce carbon emissions in smart-city transportation; a k-nearest neighbour (KNN) classifier gave the best result of 98.82% accuracy along with a 31.83% energy-savings gain. Finally, a context-aware handover (HO) skipping scenario is analysed to improve the overall quality of service (QoS), as a framework for mobility management in next-generation networks (NGNs). The framework relies on passenger mobility, train trajectory, travelling time and frequency, network load, and signal-ratio data in the cardinal directions, i.e., North, East, West, and South (NEWS), achieving a best result of 94.51% with a support vector machine (SVM) classifier. These results were fed into HO skipping techniques to analyse coverage probability, throughput, and HO cost. The work is extended with a blockchain-enabled privacy-preservation mechanism to provide an end-to-end secure platform for train passengers' mobility.
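    A minimal sketch of the KNN-style mobility classification mentioned above, with purely synthetic features and labels standing in for the framework's real data:

    # Sketch only: predict a user's next serving cell with a k-nearest-neighbour classifier.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    # Columns: current cell id, speed (m/s), heading (deg), hour of day -- all made up.
    X = np.column_stack([
        rng.integers(0, 20, 1000),
        rng.uniform(0, 30, 1000),
        rng.uniform(0, 360, 1000),
        rng.integers(0, 24, 1000),
    ])
    y = (X[:, 0] + (X[:, 2] > 180)) % 20   # toy rule standing in for the true next cell

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("next-cell prediction accuracy:", knn.score(X_te, y_te))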

    Precision Pointing Control System (PPCS) system design and analysis

    The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to orient up to six independent gimbaled experiment platforms to a design-goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis-stabilized, earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun-synchronous) to 24-hour geosynchronous, with a design-goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination, in which the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control, in which gimbal orientation is controlled open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.
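    The two PPCS functions can be pictured with a short, hypothetical sketch: a known body-to-inertial attitude (the product of attitude determination) is used to compute open-loop gimbal azimuth/elevation commands toward an inertial target, with no payload feedback; the Euler-angle convention and the numbers are illustrative only, not the PPCS design.

    # Sketch only: open-loop pointing commands from a known attitude and an inertial target direction.
    import numpy as np

    def attitude_matrix(roll, pitch, yaw):
        """Body-to-inertial rotation from 3-2-1 Euler angles (radians); assumed convention."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    def gimbal_commands(target_inertial, R_body_to_inertial):
        """Azimuth and elevation (radians) of the target expressed in body-fixed axes."""
        t_body = R_body_to_inertial.T @ (target_inertial / np.linalg.norm(target_inertial))
        return np.arctan2(t_body[1], t_body[0]), np.arcsin(t_body[2])

    R = attitude_matrix(0.01, -0.02, 0.5)                  # attitude from the determination function
    az, el = gimbal_commands(np.array([1.0, 0.2, 0.3]), R)
    print("azimuth/elevation commands (deg):", np.degrees([az, el]))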

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations of researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academics, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    High-level services for networks-on-chip

    Future technology trends envision that next-generation Multiprocessor Systems-on-Chip (MPSoCs) will be composed of a large number of processing and storage elements interconnected by complex communication architectures. Communication and interconnection between these basic blocks play a crucially important role as the number of elements increases. Enabling reliable communication channels between cores therefore becomes a challenge for system designers. Networks-on-Chip (NoCs) emerged as a strategy for connecting and managing the communication between the many design elements and IP blocks required in complex Systems-on-Chip (SoCs). The topic can be considered a multidisciplinary synthesis of the multiprocessing, parallel computing, networking, and on-chip communication domains. In addition to standard communication services, Networks-on-Chip can be employed to support the implementation of system-level services. This dissertation demonstrates how high-level services can be added to an MPSoC platform by embedding appropriate hardware/software support in the network interfaces (NIs) of the NoC. The implementation of innovative modules acting in parallel with protocol translation and data transmission in the NIs is proposed and evaluated. These modules can support the execution of high-level services in the NoC at a relatively low cost in terms of area and energy consumption. Three types of services are addressed and discussed: security, monitoring, and fault tolerance. With respect to security, the dissertation discusses the implementation of an innovative data-protection mechanism for detecting and preventing illegal accesses to protected memory blocks and/or memory-mapped peripherals. The second aspect is addressed by proposing a monitoring system based on programmable multipurpose monitoring probes aimed at detecting NoC internal events and run-time characteristics. As a last topic, new architectural solutions for the design of fault-tolerant network interfaces are presented and discussed.
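    As a software analogy of the data-protection mechanism described above (the dissertation realises it in network-interface hardware), the following sketch checks each transaction against a table of protected address ranges and allowed initiators; the rule table and addresses are invented for illustration.

    # Sketch only: an address-range "firewall" of the kind a network interface could enforce.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        start: int           # first protected address (inclusive)
        end: int             # last protected address (inclusive)
        allowed: frozenset   # initiator (core) IDs permitted to access the range
        writable: bool       # whether writes to the range are allowed at all

    RULES = [
        Rule(0x4000_0000, 0x4000_FFFF, frozenset({0}), True),        # secure core only
        Rule(0x5000_0000, 0x5FFF_FFFF, frozenset({0, 1, 2}), False)  # shared read-only region
    ]

    def access_allowed(initiator_id: int, address: int, is_write: bool) -> bool:
        """Return True if the NI should forward the transaction, False to block it and raise an alert."""
        for rule in RULES:
            if rule.start <= address <= rule.end:
                if initiator_id not in rule.allowed:
                    return False
                return rule.writable or not is_write
        return True  # unprotected addresses pass through untouched

    print(access_allowed(1, 0x4000_0010, is_write=True))   # blocked: core 1 is not on the allow-list
    print(access_allowed(0, 0x5000_0000, is_write=True))   # blocked: the region is read-only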