
    PyCUDA and PyOpenCL: A Scripting-Based Approach to GPU Run-Time Code Generation

    High-performance computing has recently seen a surge of interest in heterogeneous systems, with an emphasis on modern Graphics Processing Units (GPUs). These devices offer tremendous potential for performance and efficiency in important large-scale applications of computational science. However, exploiting this potential can be challenging, as one must adapt to the specialized and rapidly evolving computing environment currently exhibited by GPUs. One way of addressing this challenge is to embrace better techniques and develop tools tailored to their needs. This article presents one simple technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL, two open-source toolkits that support this technique. In introducing PyCUDA and PyOpenCL, this article proposes the combination of a dynamic, high-level scripting language with the massive performance of a GPU as a compelling two-tiered computing platform, potentially offering significant performance and productivity advantages over conventional single-tier, static systems. The concept of RTCG is simple and easily implemented using existing, robust infrastructure. Nonetheless, it is powerful enough to support (and encourage) the creation of custom application-specific tools by its users. The premise of the paper is illustrated by a wide range of examples where the technique has been applied with considerable success. (Comment: Submitted to Parallel Computing, Elsevier)
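
    To make the RTCG idea concrete, here is a minimal sketch (not taken from the paper) of run-time code generation with PyCUDA: the CUDA kernel is assembled as a Python string with a run-time value baked in, then compiled on the fly. The kernel name and the scaling example are illustrative assumptions.

```python
# Minimal RTCG sketch with PyCUDA: generate and compile a kernel at run time.
import numpy as np
import pycuda.autoinit                     # initializes a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

scale = 2.5                                # value only known at run time
kernel_src = """
__global__ void scale_vec(float *dst, const float *src)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    dst[i] = %(SCALE)ff * src[i];
}
""" % {"SCALE": scale}                     # constant baked into the kernel source

mod = SourceModule(kernel_src)             # the CUDA compiler is invoked here, at run time
scale_vec = mod.get_function("scale_vec")

src = np.random.randn(256).astype(np.float32)
dst = np.empty_like(src)
scale_vec(drv.Out(dst), drv.In(src), block=(256, 1, 1), grid=(1, 1))
assert np.allclose(dst, scale * src)
```

    Because the kernel text is an ordinary Python string, loop bounds, data types and unroll factors can all be specialized per problem instance before compilation, which is the core of the RTCG argument.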

    Brain networks under attack: robustness properties and the impact of lesions

    A growing number of studies approach the brain as a complex network, the so-called ‘connectome’. Adopting this framework, we examine what types or extent of damage the brain can withstand—referred to as network ‘robustness’—and conversely, which kinds of distortions can be expected after brain lesions. To this end, we review computational lesion studies and empirical studies investigating network alterations in brain tumour, stroke and traumatic brain injury patients. Common to these three types of focal injury is that there is no unequivocal relationship between the anatomical lesion site and its topological characteristics within the brain network. Furthermore, large-scale network effects of these focal lesions are compared to those of a widely studied multifocal neurodegenerative disorder, Alzheimer’s disease, in which central parts of the connectome are preferentially affected. Results indicate that human brain networks are remarkably resilient to different types of lesions, compared to other types of complex networks such as random or scale-free networks. However, lesion effects have been found to depend critically on the topological position of the lesion. In particular, damage to network hub regions—and especially those connecting different subnetworks—was found to cause the largest disturbances in network organization. Regardless of lesion location, evidence from empirical and computational lesion studies shows that lesions cause significant alterations in global network topology. The direction of these changes, though, remains to be elucidated. Encouragingly, both empirical and modelling studies have indicated that after focal damage, the connectome carries the potential to recover at least to some extent, with normalization of graph metrics being related to improved behavioural and cognitive functioning. To conclude, we highlight possible clinical implications of these findings, point out several methodological limitations that pertain to the study of brain diseases adopting a network approach, and provide suggestions for future research.
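
    The computational lesion studies reviewed here typically simulate damage by deleting nodes from a graph and tracking changes in graph metrics. Below is an illustrative sketch (a toy scale-free graph with assumed parameters, not data from any study) comparing the effect of removing hub nodes versus random nodes on global efficiency, using the networkx library.

```python
# Toy "lesion" experiment: compare hub-targeted vs. random node removal.
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=200, m=3, seed=1)    # toy scale-free network, not a real connectome
baseline = nx.global_efficiency(G)

def lesioned_efficiency(graph, nodes):
    g = graph.copy()
    g.remove_nodes_from(nodes)                      # simulate a focal lesion
    return nx.global_efficiency(g)

hubs = [n for n, _ in sorted(G.degree, key=lambda d: d[1], reverse=True)[:10]]
random_nodes = random.Random(1).sample(list(G.nodes), 10)

print("baseline efficiency:      ", round(baseline, 3))
print("after hub-targeted lesion:", round(lesioned_efficiency(G, hubs), 3))
print("after random lesion:      ", round(lesioned_efficiency(G, random_nodes), 3))
```

    In line with the review's conclusion, targeted removal of high-degree hubs typically degrades global efficiency more than removal of the same number of randomly chosen nodes.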

    Diagnosis of an EPS module

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Electrical and Computer Engineering. This thesis addresses and contextualizes the problem of diagnosis of an Evolvable Production System (EPS). An EPS is a complex and lively entity composed of intelligent modules that interact through bio-inspired mechanisms to ensure high system availability and seamless reconfiguration. The current economic situation, together with the increasing demand for high-quality, low-priced customized products, has imposed a shift in the production policies of enterprises. Shop floors have to become more agile and flexible to accommodate the new production paradigms. Rather than selling products, enterprises are increasingly offering services to explore business opportunities. The new production paradigms, potentiated by advances in Information Technologies (IT), especially in web-related standards and technologies, as well as the progressive acceptance of the multi-agent systems (MAS) concept and related technologies, envision collections of modules whose individual and collective function adapts and evolves, ensuring the fitness and adequacy of the shop floor in tackling profitable but volatile business opportunities. Despite the richness of the interactions and the effort put into modelling them, their potential to favour fault propagation and interference in these complex environments has been ignored from a diagnostic point of view. With the increase of distributed and autonomous components that interact in the execution of processes, current diagnostic approaches will soon be insufficient. While current system dynamics are complex and to a certain extent unpredictable, the adoption of the next generation of approaches and technologies comes at the cost of yet increased complexity. Whereas most of the research in such distributed industrial systems focuses on the study and establishment of control structures, the problem of diagnosis has been left relatively unattended. There are, however, significant open challenges in the diagnosis of such modular systems, including understanding fault propagation and ensuring scalability and co-evolution. This work provides an implementation of a state-of-the-art agent-based, interaction-oriented architecture compliant with the EPS paradigm that supports the introduction of a newly developed diagnostic algorithm able to cope with the challenges of the modern manufacturing paradigm and to provide diagnostic analyses that exploit the network dimension of multi-agent systems.

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    As cloud computing environments become complex, adversaries have become highly sophisticated and unpredictable. Moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions, latent risks, and Unobserved or Unobservable Risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address unpredictable security challenges borne out of UUURs. While survivability is a well-addressed phenomenon in non-extinct prey animals, applying prey survivability to cloud computing directly is challenging due to contradicting end goals. How to manage evolving survivability goals and requirements under contradicting environmental conditions adds to the challenges. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple and disparate perspectives of cloud security challenges. In addition, it proposes using TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions by resolving contradictions. First, it develops a 3-step process to facilitate the interdomain transfer of concepts from nature to the cloud. Moreover, TRIZ’s generic approach suggests specific solutions for cloud computing survivability. Then, the thesis presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon TRIZ-derived solutions. The framework’s run-time is pushed to user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improve the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improve their overall survivability. Hypothesis testing conclusively supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that, by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, allowing Pi-CCSF’s decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.

    Detecting Anomalies From Big Data System Logs

    Nowadays, big data systems (e.g., Hadoop and Spark) are being widely adopted by many domains for offering effective data solutions, such as manufacturing, healthcare, education, and media. A common problem in big data systems is the anomaly, i.e., a state that deviates from normal execution and degrades computational performance or kills running programs. It is becoming a necessity to detect anomalies and analyze their causes. An effective and economical approach is to analyze system logs. Big data systems produce numerous unstructured logs that contain buried valuable information. However, manually detecting anomalies from system logs is a tedious and daunting task. This dissertation proposes four approaches that can accurately and automatically analyze anomalies from big data system logs without extra monitoring overhead. Moreover, to detect abnormal tasks in Spark logs and analyze root causes, we design a utility to conduct fault injection and collect logs from multiple compute nodes. (1) Our first method is a statistical approach that can locate abnormal tasks and calculate the weights of factors for analyzing the root causes. In the experiment, four potential root causes are considered, i.e., CPU, memory, network, and disk I/O. The experimental results show that the proposed approach is accurate in detecting abnormal tasks as well as finding the root causes. (2) To give a more reasonable probability result and avoid ad-hoc factor-weight calculation, we propose a neural network approach to analyze the root causes of abnormal tasks. We leverage a General Regression Neural Network (GRNN) to identify root causes for abnormal tasks. The likelihood of reported root causes is presented to users according to the factor weights produced by the GRNN. (3) To further improve anomaly detection by avoiding manual feature extraction, we propose a novel approach leveraging Convolutional Neural Networks (CNNs). Our proposed model can automatically learn event relationships in system logs and detect anomalies with high accuracy. Our deep neural network consists of logkey2vec embeddings, three 1D convolutional layers, a dropout layer, and max pooling. According to our experiments, our CNN-based approach has better accuracy than approaches using Long Short-Term Memory (LSTM) and Multilayer Perceptron (MLP) networks at detecting anomalies in Hadoop Distributed File System (HDFS) logs. (4) To analyze system logs more accurately, we extend our CNN-based approach with two attention schemes to detect anomalies in system logs. The two attention schemes focus on different features of the CNN's output. We evaluate our approaches on several benchmarks, and the attention-based CNN model shows the best performance among all state-of-the-art methods.
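
    As a concrete illustration of the CNN architecture outlined in the abstract (log-key embeddings, three 1D convolutional layers, dropout, max pooling), here is a minimal PyTorch sketch. It is not the dissertation's implementation; the vocabulary size, sequence length, filter counts and kernel sizes are assumptions.

```python
# Illustrative CNN for log-sequence anomaly detection (normal vs. anomalous).
import torch
import torch.nn as nn

class LogCNN(nn.Module):
    def __init__(self, vocab_size=200, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # logkey2vec-style embedding
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 64, kernel_size=k) for k in (3, 4, 5)   # three 1D conv layers
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(64 * 3, 2)                        # binary output: normal / anomalous

    def forward(self, x):                                     # x: (batch, seq_len) of log-key ids
        e = self.embed(x).transpose(1, 2)                     # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(e)).max(dim=2).values for c in self.convs]   # max pooling over time
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))

# Forward pass on a batch of 8 log-key sequences of length 50.
logits = LogCNN()(torch.randint(0, 200, (8, 50)))
print(logits.shape)                                           # torch.Size([8, 2])
```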

    A Review on Computational Intelligence Techniques in Cloud and Edge Computing

    Cloud computing (CC) is a centralized computing paradigm that accumulates resources centrally and provides these resources to users through the Internet. Although CC holds a large number of resources, it may not be suitable for real-time mobile applications, as it is usually far away from users geographically. On the other hand, edge computing (EC), which distributes resources to the network edge, enjoys increasing popularity in applications with low-latency and high-reliability requirements. EC provides resources in a decentralized manner and can respond to users’ requirements faster than normal CC, but with limited computing capacities. As both CC and EC are resource-sensitive, several key issues arise, such as how to conduct job scheduling, resource allocation, and task offloading, which significantly influence the performance of the whole system. To tackle these issues, many optimization problems have been formulated. These optimization problems usually have complex properties, such as non-convexity and NP-hardness, which may not be addressed by traditional convex-optimization-based solutions. Computational intelligence (CI), consisting of a set of nature-inspired computational approaches, has recently exhibited great potential in addressing these optimization problems in CC and EC. This article provides an overview of research problems in CC and EC and recent progress in addressing them with the help of CI techniques. Informative discussions and future research trends are also presented, with the aim of offering insights to the readers and motivating new research directions.
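
    To show the flavour of the CI techniques surveyed, here is a toy sketch (entirely illustrative; the task sizes, node speeds and GA parameters are invented) of a simple genetic algorithm assigning tasks to edge nodes so as to minimise the makespan, one of the scheduling/offloading objectives discussed in the article.

```python
# Toy genetic algorithm for task-to-edge-node assignment (minimise makespan).
import random

task_size = [4, 7, 2, 9, 5, 3]        # work units per task (assumed)
node_speed = [2.0, 1.0, 1.5]          # work units per second per edge node (assumed)

def makespan(assign):
    """Completion time of the most loaded node under a given assignment."""
    load = [0.0] * len(node_speed)
    for task, node in enumerate(assign):
        load[node] += task_size[task] / node_speed[node]
    return max(load)

def evolve(pop_size=30, generations=200, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(node_speed)) for _ in task_size] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                          # lower makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(task_size))
            child = a[:cut] + b[cut:]                   # one-point crossover
            if rng.random() < mutation_rate:            # mutation: reassign one task
                child[rng.randrange(len(child))] = rng.randrange(len(node_speed))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("assignment:", best, "makespan:", round(makespan(best), 2))
```

    Real formulations add latency, energy and reliability constraints, which is where the more elaborate CI methods reviewed in the article come in.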

    Internet Sensor Grid: Experiences with Passive and Active Instruments

    The Internet is constantly evolving, with new emergent behaviours arising, some of them malicious. This paper discusses opportunities and research directions for an Internet sensor grid for malicious behaviour detection, analysis and countermeasures. We use two example sensors as a basis: firstly, the honeyclient for identifying malicious servers and content (i.e. drive-by downloads, the most prevalent attack vector against client systems), and secondly, the network telescope for detecting Internet Background Radiation (IBR), unsolicited, non-productive traffic that traverses the Internet and is often malicious in nature or origin. Large amounts of security data can be collected from such sensors for analysis, and federating honeyclient and telescope data provides a worldwide picture of attacks that could enable the provision of countermeasures. In this paper we outline some experiences with these sensors and with analyzing network telescope data through Grid computing as part of an “intelligence layer” within the Internet.
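
    To hint at what telescope analysis looks like in practice, here is a toy sketch (fabricated records, not the paper's data or tooling) of the kind of aggregation applied to network-telescope traffic: packets arriving at unused address space are grouped by destination port to surface scanning campaigns.

```python
# Toy aggregation of network-telescope (darknet) records by destination port.
from collections import Counter

# (source_ip, dst_port) pairs; the values are illustrative only.
telescope_records = [
    ("198.51.100.7", 445), ("198.51.100.7", 445), ("192.0.2.44", 445),
    ("203.0.113.9", 23), ("203.0.113.9", 2323), ("198.51.100.7", 23),
]

port_counts = Counter(port for _, port in telescope_records)
for port, hits in port_counts.most_common():
    print(f"port {port}: {hits} unsolicited probes")
```

    Because no legitimate traffic should ever reach a telescope's unused address range, every packet counted this way is by definition background radiation or an active probe.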

    SABRE: A bio-inspired fault-tolerant electronic architecture

    As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault-tolerant. Hence, we are motivated to take inspiration from biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation for the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport triggered architecture (TTA), which is able to perform partial-dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser-grained reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance.
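
    The layered repair strategy can be pictured with a small sketch (purely illustrative; SABRE is a hardware architecture, and the severity thresholds and layer predicates below are invented): a fault is handled at the lowest layer that can repair it and escalated otherwise.

```python
# Illustrative escalation through a SABRE-like repair hierarchy.
class RepairLayer:
    def __init__(self, name, can_fix):
        self.name = name
        self.can_fix = can_fix            # predicate: can this layer repair the fault?

def handle_fault(fault, layers):
    """Try each layer from finest to coarsest; escalate when repair fails."""
    for layer in layers:
        if layer.can_fix(fault):
            return f"{fault['unit']} fault repaired at {layer.name} level"
    return f"{fault['unit']} fault unrecoverable at every level"

# Severity thresholds are arbitrary, for demonstration only.
hierarchy = [
    RepairLayer("cell (intra-/extra-cellular repair)", lambda f: f["severity"] <= 1),
    RepairLayer("TTA (partial-dynamic reconfiguration)", lambda f: f["severity"] <= 2),
    RepairLayer("multi-processor (coarse-grain reconfiguration)", lambda f: f["severity"] <= 3),
]

print(handle_fault({"unit": "ALU", "severity": 2}, hierarchy))   # repaired at TTA level
```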

    Getting Things Done: The Science behind Stress-Free Productivity

    Allen (2001) proposed the “Getting Things Done” (GTD) method for personal productivity enhancement and reduction of the stress caused by information overload. This paper argues that recent insights in psychology and cognitive science support and extend GTD’s recommendations. We first summarize GTD with the help of a flowchart. We then review the theories of situated, embodied and distributed cognition that purport to explain how the brain processes information and plans actions in the real world. The conclusion is that the brain relies heavily on the environment to function as an external memory, a trigger for actions, and a source of affordances, disturbances and feedback. We then show how these principles are practically implemented in GTD, with its focus on organizing tasks into “actionable” external memories, and on opportunistic, situation-dependent execution. Finally, we propose an extension of GTD to support collaborative work, inspired by the concept of stigmergy.
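
    The GTD flowchart the paper summarizes is essentially a small decision procedure, which can be sketched in code. The sketch below is an illustrative reading of the standard GTD workflow (the actionable/non-actionable split and the two-minute rule), not a reproduction of the paper's figure.

```python
# Illustrative GTD-style triage of an incoming item.
def triage(item, actionable, takes_under_two_minutes=False,
           delegatable=False, keep_as_reference=False):
    """Return the GTD bucket an incoming item belongs in."""
    if not actionable:
        return "reference file" if keep_as_reference else "trash / someday-maybe list"
    if takes_under_two_minutes:
        return "do it now"                      # the two-minute rule
    if delegatable:
        return "waiting-for list"               # delegate and track
    return "next-actions list or calendar"      # defer to an external memory

print(triage("reply to short email", actionable=True, takes_under_two_minutes=True))
print(triage("old conference flyer", actionable=False))
```

    The paper's point is that each bucket is an external memory in the environment, offloading the brain in exactly the way situated and distributed cognition predict.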
