
    Separable Convex Optimization with Nested Lower and Upper Constraints

    We study a convex resource allocation problem in which lower and upper bounds are imposed on partial sums of allocations. This model is linked to a large range of applications, including production planning, speed optimization, stratified sampling, support vector machines, portfolio management, and telecommunications. We propose an efficient gradient-free divide-and-conquer algorithm, which uses monotonicity arguments to generate valid bounds from the recursive calls and to eliminate linking constraints based on the information from sub-problems. This algorithm does not need strict convexity or differentiability. It produces an $\epsilon$-approximate solution for the continuous problem in $\mathcal{O}(n \log m \log \frac{nB}{\epsilon})$ time and an integer solution in $\mathcal{O}(n \log m \log B)$ time, where $n$ is the number of decision variables, $m$ is the number of constraints, and $B$ is the resource bound. A complexity of $\mathcal{O}(n \log m)$ is also achieved for the linear and quadratic cases. These are the best complexities known to date for this important problem class. Our experimental analyses confirm the good performance of the method, which produces optimal solutions for problems with up to 1,000,000 variables in a few seconds. Promising applications to the support vector ordinal regression problem are also investigated.
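Concretely, the nested problem class above can be sketched as follows; the index sets and the role of the total resource are our notational assumptions, since the abstract does not fix them:

```latex
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} f_i(x_i)
\quad \text{s.t.} \quad
a_k \;\le\; \sum_{i=1}^{s_k} x_i \;\le\; b_k, \qquad k = 1, \dots, m,
```

with each $f_i$ convex, nested prefix bounds $s_1 \le s_2 \le \dots \le s_m \le n$, and $B$ an upper bound on the total allocation $\sum_{i=1}^{n} x_i$.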

    Data-driven cyber attack detection and mitigation for decentralized wide-area protection and control in smart grids

    Modern power systems have already evolved into complicated cyber physical systems (CPS), often referred to as smart grids, due to the continuous expansion of the electrical infrastructure, the growing number of heterogeneous system components and players, and the consequent application of a diversity of information and telecommunication technologies to facilitate the Wide Area Monitoring, Protection and Control (WAMPAC) of day-to-day power system operation. Because of this reliance on cyber technologies, WAMPAC, among other critical functions, is prone to various malicious cyber attacks. Successful cyber attacks, especially those that sabotage the operation of the Bulk Electric System (BES), can cause great financial losses and public panic. Application of conventional IT security solutions is indispensable, but it often turns out to be insufficient to mitigate sophisticated attacks that deploy zero-day vulnerabilities or social engineering tactics. To further improve the resilience of smart grid operation under cyber attack, it is desirable to make the WAMPAC functions themselves capable of detecting various anomalies automatically, carrying out adaptive activity adjustments in time, and thus staying unimpaired even under attack. Most existing research efforts attempt to achieve this by adding novel functional modules, such as model-based anomaly detectors, to the legacy centralized WAMPAC functions. In contrast, this dissertation investigates the application of data-driven algorithms to cyber attack detection and mitigation within a decentralized architecture, aiming to improve the situational awareness and self-adaptiveness of WAMPAC. The first part of the research focuses on the decentralization of the System Integrity Protection Scheme (SIPS) with a Multi-Agent System (MAS), within which data-driven anomaly detection and optimal adaptive load shedding are further explored.
An algorithm named Support Vector Machine embedded Layered Decision Tree (SVMLDT) is proposed for the anomaly detection; it provides satisfactory detection accuracy as well as decision-making interpretability. The adaptive load shedding is carried out by every agent individually with dynamic programming. The load shedding relies on load profile propagation among peer agents, and attack adaptiveness is accomplished by maintaining the historical mean of the load shedding proportion. Load shedding only takes place after consensus on the anomaly detection is achieved among all interconnected agents, and it serves the purpose of mitigating certain cyber attacks. The attack resilience of the decentralized SIPS is evaluated using the IEEE 39-bus model. It is shown that, unlike the traditional centralized SIPS, the proposed solution is able to carry out the remedial actions under most Denial of Service (DoS) attacks. The second part investigates clustering-based anomalous behavior detection and peer-assisted mitigation for power system generation control. To reduce the dimensionality of the data, three metrics are designed to interpret the behavior conformity of generators within the same balancing area. Semi-supervised K-means clustering and a density-sensitive clustering algorithm based on Hierarchical DBSCAN (HDBSCAN) are both applied to clustering in the 3D feature space. To mitigate cyber attacks targeting the generation control commands, a peer-assisted strategy is proposed. When the control commands from the control center are detected as anomalous, i.e., either missing or with manipulated payloads, the generating unit utilizes peer data to infer and estimate a new generation adjustment value as a replacement.
Linear regression is utilized to obtain the relation between the control values received by different generating units, Moving Target Defense (MTD) is adopted during peer selection, and 1-dimensional clustering is performed on the inferred control values, followed by the final control value estimation. The proposed mitigation strategy requires that generating units be able to communicate with each other in a peer-to-peer manner. Evaluation results suggest the efficacy of the proposed solution in counteracting data availability and data integrity attacks targeting the generation controls. However, the strategy stays effective only if fewer than half of the generating units are compromised, and it is not able to mitigate cyber attacks targeting the measurements involved in the generation control.
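As a rough illustration of the peer-assisted replacement step described above (regression over peer command histories followed by a robust consensus), consider the following sketch. The function names and data are invented, and the median stands in for the paper's 1-dimensional clustering of inferred values; it is not the dissertation's implementation:

```python
# Hypothetical sketch: a generating unit whose command is missing or
# manipulated infers a replacement from peers' current commands, using a
# per-peer linear regression fitted on historical command pairs, then
# takes a robust (median) consensus over the inferred values.
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for paired histories."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def estimate_command(peer_histories, own_history, peer_current):
    """Infer the unit's missing command from each peer's current command
    via the fitted relation, then take the median as the consensus."""
    inferred = []
    for peer_id, hist in peer_histories.items():
        a, b = fit_line(hist, own_history)
        inferred.append(a * peer_current[peer_id] + b)
    return statistics.median(inferred)

# Toy example: peers' commands historically scale with the unit's command.
peers = {"G2": [1.0, 2.0, 3.0], "G3": [2.0, 4.0, 6.0]}
own = [2.0, 4.0, 6.0]              # the unit's own past commands
current = {"G2": 4.0, "G3": 8.0}   # peers' latest (trusted) commands
print(estimate_command(peers, own, current))  # → 8.0
```

The median makes the consensus tolerant of a minority of manipulated peers, which matches the abstract's caveat that the strategy fails once half or more of the units are compromised.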

    MaSiF: Machine learning guided auto-tuning of parallel skeletons


    Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    To increase the efficiency and precision of large-scale road network traffic flow prediction, this paper proposes a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing, built on an analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In a cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and the optimized parallel SVM model is then used to predict traffic flow. The proposed model was verified on traffic flow data from Haizhu District in Guangzhou City and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
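The GA half of the GA-SVM pipeline can be sketched in a few lines. This is an illustration only: the fitness below is an invented surrogate with a known optimum, whereas the paper's fitness would be cross-validated SVM prediction accuracy, with the (expensive) fitness evaluations distributed across cloud workers:

```python
# Minimal generational GA searching an SVM's (log C, log gamma) space.
# The surrogate fitness and all parameters are illustrative assumptions.
import random

random.seed(0)

def fitness(log_c, log_gamma):
    # Hypothetical stand-in with optimum at log_c = 1, log_gamma = -2;
    # the real model would evaluate SVM cross-validation accuracy here.
    return -((log_c - 1) ** 2 + (log_gamma + 2) ** 2)

def evolve(pop_size=20, generations=40):
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
            children.append((mid[0] + random.gauss(0, 0.1),  # mutation
                             mid[1] + random.gauss(0, 0.1)))
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

best_c, best_gamma = evolve()
print(round(best_c, 1), round(best_gamma, 1))  # converges near (1.0, -2.0)
```

Parallelizing this GA, as the paper does, amounts to evaluating `fitness` for the whole population concurrently, since the evaluations are independent.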

    Silkroad: A system supporting DSM and multiple paradigms in cluster computing

    Doctor of Philosophy (Ph.D.) thesis.

    Parallel evolution strategy for protein threading.

    A protein sequence folds into a specific shape in order to function in its aqueous state. Given the primary sequence of a protein, what is its three-dimensional structure? This is a long-standing problem in the field of molecular biology, and it has large implications for drug design and treatment. Among several proposed approaches, protein threading represents one of the most promising techniques. The protein threading problem (PTP) is the problem of determining the three-dimensional structure of a given but arbitrary protein sequence from a set of known structures of other proteins. This problem is known to be NP-hard, and current computational approaches to threading are time-consuming and data-intensive. In this thesis, we propose an evolution strategy (ES) based approach for protein threading (EST). We also developed two parallel approaches for the PTP, both of which are parallelizations of our novel EST. The first method, which we call SQST-PEST (Single Query Single Template Parallel EST), threads a single query against a single template; we use ES to find the best alignment between the query and the template, and the ES is parallelized. The second method, which we call SQMT-PEST (Single Query Multiple Templates Parallel EST), allows for threading a single query against multiple templates within reasonable time. We obtained better results than current comparable approaches, as well as a significant reduction in execution time. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .I85. Source: Masters Abstracts International, Volume: 44-03, page: 1403. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
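The core of an evolution strategy is easy to show in miniature. In the sketch below, a toy objective (squared distance to a target vector) stands in for the thesis's alignment/threading score, and all names and parameters are illustrative assumptions rather than the thesis's actual algorithm:

```python
# Minimal elitist (mu + lambda) evolution strategy. In EST the individuals
# would encode query-to-template alignments and `score` would be a
# threading energy; here both are toy stand-ins for illustration.
import random

random.seed(1)

TARGET = [0.5, -0.5, 1.0]  # hypothetical optimal "alignment" encoding

def score(candidate):
    # Lower is better, like an energy to be minimized.
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def evolution_strategy(mu=5, lam=20, sigma=0.3, generations=80):
    parents = [[random.uniform(-2, 2) for _ in TARGET] for _ in range(mu)]
    for _ in range(generations):
        offspring = [
            [x + random.gauss(0, sigma) for x in random.choice(parents)]
            for _ in range(lam)
        ]
        pool = parents + offspring       # elitist (mu + lambda) survival
        pool.sort(key=score)
        parents = pool[:mu]
    return parents[0]

best = evolution_strategy()
print(score(best))  # small: the ES homes in on the target
```

The lambda offspring evaluations in each generation are independent, which is exactly what SQST-PEST exploits when it parallelizes the ES.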

    CDCL(Crypto) and Machine Learning based SAT Solvers for Cryptanalysis

    Over the last two decades, we have seen a dramatic improvement in the efficiency of conflict-driven clause-learning Boolean satisfiability (CDCL SAT) solvers over industrial problems from a variety of applications such as verification, testing, security, and AI. The availability of such powerful general-purpose search tools as the SAT solver has led many researchers to propose SAT-based methods for cryptanalysis, including techniques for finding collisions in hash functions and breaking symmetric encryption schemes. A feature of all of the previously proposed SAT-based cryptanalysis work is that it is "blackbox", in the sense that the cryptanalysis problem is encoded as a SAT instance and then a CDCL SAT solver is invoked to solve said instance. A weakness of this approach is that the encoding thus generated may be too large for any modern solver to solve efficiently. Perhaps a more important weakness is that the solver is in no way specialized or tuned to solve the given instance. Finally, very little work has been done to leverage parallelism in the context of SAT-based cryptanalysis. To address these issues, we developed a set of methods that improve on the state-of-the-art in SAT-based cryptanalysis along three fronts. First, we describe an approach called CDCL(Crypto) (inspired by the CDCL(T) paradigm) to tailor the internal subroutines of the CDCL SAT solver with domain-specific knowledge about cryptographic primitives. Specifically, we extend the propagation and conflict analysis subroutines of CDCL solvers with specialized code that has knowledge about the cryptographic primitive being analyzed by the solver. We demonstrate the power of this framework on two cryptanalysis tasks: algebraic fault attacks and differential cryptanalysis of the SHA-1 and SHA-256 cryptographic hash functions.
Second, we propose a machine-learning-based parallel SAT solver that performs well on cryptographic problems relative to many state-of-the-art parallel SAT solvers. Finally, we use a formulation of SAT as Bayesian moment matching to address the heuristic initialization problem in SAT solvers.
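To make the solver baseline concrete, here is a deliberately tiny DPLL sketch (unit propagation plus chronological backtracking). CDCL solvers extend this loop with clause learning, non-chronological backtracking, and restarts; the CDCL(Crypto) idea plugs cryptographic knowledge into the propagation and conflict-analysis steps. This is an illustration, not the dissertation's solver:

```python
# Tiny DPLL SAT solver. Literals are nonzero ints (DIMACS-style: 3 means
# x3 true, -3 means x3 false); a formula is a list of clauses (lists).
def unit_propagate(clauses, assignment):
    """Repeatedly assign forced literals; return None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None  # all literals false: conflict
            if len(unassigned) == 1:  # unit clause forces its literal
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    if unit_propagate(clauses, assignment) is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment  # all variables assigned, no conflict: SAT
    var = min(free)  # naive branching heuristic
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None  # UNSAT under this partial assignment

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # → {1: True, 2: True, 3: True}
```

In a CDCL(Crypto)-style solver, a primitive-specific propagator would run alongside `unit_propagate`, deducing extra assignments from the structure of the hash function being attacked.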

    Activities of the Institute for Computer Applications in Science and Engineering (ICASE)

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 2, 1987 through March 31, 1988.

    Linux kernel support for Micro-Heterogeneous Computing

    Heterogeneous Computing (HC) is a technique that speeds the computation of large tasks by utilizing multiple computers or supercomputers, each of which is best suited to a particular type of computation. Micro-Heterogeneous Computing (MHC) has been proposed to bring this practice to individual computers. With the aid of the higher-speed, short-distance interconnects found within the next generation of personal computers and commodity servers, it should be possible to apply HC to relatively small grain sizes. MHC draws on the observation that in order for a new computing technology to become widely accepted and cost effective, there must be a suitable abstraction layer that frees application writers from needing precise technical knowledge about the system. MHC provides such an abstraction layer, referred to as the MHC framework, which provides automated solutions to many of the problems that must be overcome when utilizing HC. The framework was designed with the goals of user transparency, flexibility, and performance. The problems addressed include matching tasks to devices and scheduling them (collectively known as mapping), dependency analysis, and parallelization of serial code. All of these problems are solved dynamically at run time by the framework, whose implementation is discussed herein. In support of this framework, this thesis specifies a format for libraries that provide common functions and free their users from the tasks of code profiling and analytical benchmarking. This thesis provides the first implementation of such an abstraction layer, utilizing the Linux operating system. It supplies not only the kernel-level support necessary to schedule tasks to hardware but also the entire core framework, with functioning solutions to the problems mentioned above, along with well-defined interfaces and methods for extending the MHC system with new scheduling heuristics, function libraries, and device drivers.
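The mapping step described above (matching tasks to devices, then scheduling) can be sketched with a simple greedy earliest-finish heuristic. The device names, task kinds, and benchmark numbers below are invented for illustration; the thesis's framework does this dynamically in the kernel with pluggable heuristics:

```python
# Hypothetical mapping sketch: assign each task to the capable device
# with the earliest estimated finish time (current load + benchmarked
# runtime), which is the kind of heuristic an MHC scheduler could plug in.
def map_tasks(tasks, devices, bench):
    """tasks: list of (task_id, kind); devices: {name: set of kinds};
    bench: {(kind, device): estimated runtime}. Returns (schedule, makespan)."""
    load = {name: 0.0 for name in devices}
    schedule = {}
    for task_id, kind in tasks:
        candidates = [d for d, kinds in devices.items() if kind in kinds]
        best = min(candidates, key=lambda d: load[d] + bench[(kind, d)])
        schedule[task_id] = best
        load[best] += bench[(kind, best)]
    return schedule, max(load.values())

devices = {"cpu0": {"fft", "matmul"}, "dsp0": {"fft"}}
bench = {("fft", "cpu0"): 4.0, ("fft", "dsp0"): 1.0, ("matmul", "cpu0"): 2.0}
tasks = [("t1", "fft"), ("t2", "fft"), ("t3", "matmul")]
schedule, makespan = map_tasks(tasks, devices, bench)
print(schedule, makespan)  # → {'t1': 'dsp0', 't2': 'dsp0', 't3': 'cpu0'} 2.0
```

The analytical-benchmarking library format the thesis specifies would supply the `bench` table, so application writers never provide these numbers themselves.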

    CAPTCHA Types and Breaking Techniques: Design Issues, Challenges, and Future Research Directions

    The proliferation of the Internet and mobile devices has resulted in malicious bots gaining access to genuine resources and data. Bots may instigate phishing, unauthorized access, denial-of-service, and spoofing attacks, to mention a few. Authentication and testing mechanisms that verify end-users and prohibit malicious programs from infiltrating services and data are a strong defense against malicious bots. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is an authentication process to confirm that the user is a human and, hence, that access may be granted. This paper provides an in-depth survey of CAPTCHAs and focuses on two main things: (1) a detailed discussion of various CAPTCHA types along with their advantages, disadvantages, and design recommendations, and (2) an in-depth analysis of different CAPTCHA breaking techniques. The survey is based on over two hundred studies on the subject conducted from 2003 to date. The analysis reinforces the need to design more attack-resistant CAPTCHAs while keeping their usability intact. The paper also highlights design challenges and open issues related to CAPTCHAs. Furthermore, it provides useful recommendations for breaking CAPTCHAs.