Artificial intelligence in cancer target identification and drug discovery
Artificial intelligence offers advanced methods for identifying novel anticancer targets and discovering novel drugs from biological networks, because such networks can effectively preserve and quantify the interactions between components of the cellular systems underlying human diseases such as cancer. Here, we review and discuss how artificial intelligence approaches can be employed to identify novel anticancer targets and discover drugs. First, we describe the scope of artificial intelligence analysis of biological networks for novel anticancer target investigations. Second, we review and discuss the basic principles and theory of commonly used network-based and machine-learning-based artificial intelligence algorithms. Finally, we showcase the applications of artificial intelligence approaches in cancer target identification and drug discovery. Taken together, artificial intelligence models provide a quantitative framework for studying the relationship between network characteristics and cancer, thereby leading to the identification of potential anticancer targets and the discovery of novel drug candidates.
visone - Software for the Analysis and Visualization of Social Networks
We present the software tool visone, which combines graph-theoretic methods for the analysis of social networks with tailored means of visualization. Our main contribution is the design of novel graph-layout algorithms that accurately reflect computed analysis results in well-arranged drawings of the networks under consideration. Besides this, we give a detailed description of the design of the software tool and the analysis methods it provides.
A Network Science perspective of Graph Convolutional Networks: A survey
The mining and exploitation of graph structural information have been focal points in the study of complex networks. Traditional structural measures in Network Science focus on the analysis and modelling of complex networks from the perspective of network structure, such as centrality measures, the clustering coefficient, and motifs and graphlets, and they have become basic tools for studying and understanding graphs. In comparison, graph neural networks, especially graph convolutional networks (GCNs), are particularly effective at integrating node features into graph structures via neighbourhood aggregation and message passing, and have been shown to significantly improve performance in a variety of learning tasks. These two classes of methods are, however, typically treated separately with limited references to each other. In this work, aiming to establish relationships between them, we provide a network science perspective of GCNs. Our novel taxonomy classifies GCNs from three structural information angles, i.e., the layer-wise message aggregation scope, the message content, and the overall learning scope. Moreover, as a prerequisite for reviewing GCNs via a network science perspective, we also summarise traditional structural measures and propose a new taxonomy for them. Finally, and most importantly, we draw connections between traditional structural approaches and graph convolutional networks, and discuss potential directions for future research.
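The neighbourhood aggregation the survey refers to can be made concrete with a minimal sketch of one GCN propagation step, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W). The 4-node graph, one-hot features and random weights below are illustrative toy values, not taken from the survey.

```python
import numpy as np

# Toy 4-node undirected graph (adjacency matrix), one-hot node features,
# and a random learnable weight matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)
W = np.random.default_rng(0).normal(size=(4, 2))

# Add self-loops and symmetrically normalise the adjacency matrix.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One round of message passing: each node averages over its neighbourhood
# (including itself), then applies a linear map and ReLU.
H_next = np.maximum(A_norm @ H @ W, 0.0)
print(H_next.shape)  # each node now carries a 2-dimensional embedding
```

Stacking such layers widens the layer-wise aggregation scope one hop at a time, which is exactly the first axis of the survey's taxonomy.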
Introduction to the Modeling and Analysis of Complex Systems
Keep up to date on Introduction to Modeling and Analysis of Complex Systems at http://bingweb.binghamton.edu/~sayama/textbook/!
Introduction to the Modeling and Analysis of Complex Systems introduces students to mathematical/computational modeling and analysis developed in the emerging interdisciplinary field of Complex Systems Science. Complex systems are systems made of a large number of microscopic components interacting with each other in nontrivial ways. Many real-world systems can be understood as complex systems, where critically important information resides in the relationships between the parts and not necessarily within the parts themselves. This textbook offers an accessible yet technically-oriented introduction to the modeling and analysis of complex systems. The topics covered include: fundamentals of modeling, basics of dynamical systems, discrete-time models, continuous-time models, bifurcations, chaos, cellular automata, continuous field models, static networks, dynamic networks, and agent-based models. Most of these topics are discussed in two chapters, one focusing on computational modeling and the other on mathematical analysis. This unique approach provides a comprehensive view of related concepts and techniques, and allows readers and instructors to flexibly choose relevant materials based on their objectives and needs. Python sample codes are provided for each modeling example.
This textbook is available for purchase in both grayscale and color via Amazon.com and CreateSpace.com.
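In the spirit of the Python sample codes the book provides, a minimal discrete-time model (one of the listed topics) can be sketched with the logistic map x_{t+1} = r·x_t·(1 − x_t), a standard example of a simple iterated map exhibiting a route to chaos; the parameter values here are illustrative.

```python
def simulate_logistic(r, x0=0.1, steps=50):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)
    and return the full trajectory, starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# For 1 < r < 3 the trajectory approaches the stable fixed point 1 - 1/r;
# for r = 4 the dynamics are chaotic and sensitive to the initial condition.
print(simulate_logistic(2.8)[-1])
print(simulate_logistic(4.0)[:5])
```

Sweeping r and plotting the tail of each trajectory reproduces the classic bifurcation diagram discussed in the chapters on bifurcations and chaos.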
Complex systems in financial economics: Applications to interbank and stock markets
Complex systems are characterised by strong interaction at the micro level that can induce large changes at the macro level. This thesis applies the theory of complex systems to the interbank market (Part I) and the stock market (Part II). Evidence found in data from the Netherlands and the US makes clear in what sense these markets are complex systems. The observed phenomena are explained by modelling the adaptive behaviour of financial agents, for example how they form their trading relationships or how they choose investment strategies. The applications help to understand the mechanisms behind the emergence of the financial-economic crisis of 2007 and 2008, and relate to the debate on policy measures aimed at preventing a future crisis of this kind.
High Performance Large Graph Analytics by Enhancing Locality
Graphs are widely used in a variety of domains for representing entities and their relationship to each other. Graph analytics helps to understand, detect, extract and visualize insightful relationships between different entities. Graph analytics has a wide range of applications in various domains including computational biology, commerce, intelligence, health care and transportation. The breadth of problems that require large graph analytics is growing rapidly resulting in a need for fast and efficient graph processing.
One of the major challenges in graph processing is poor locality of reference. Locality of reference refers to the phenomenon of frequently accessing the same memory location or adjacent memory locations. Applications with poor data locality reduce the effectiveness of the cache memory: they incur a large number of cache misses, requiring accesses to high-latency main memory. Therefore, good locality is essential for good performance. Most graph processing applications have highly random memory access patterns. Coupled with the current large sizes of graphs, this results in poor cache utilization. Additionally, the computation-to-data-access ratio in many graph processing applications is very low, making it difficult to hide memory latency with computation. It is also challenging to efficiently parallelize most graph applications. Many real-world graphs have unbalanced degree distributions, making it difficult to achieve a balanced workload. The parallelism in graph applications is generally fine-grained in nature, which calls for efficient synchronization and communication between the processing units.
Techniques for enhancing locality have been well studied in the context of regular applications like linear algebra, but those techniques are in most cases not applicable to graph problems. In this dissertation, we propose two techniques for enhancing locality in graph algorithms: access transformation and task-set reduction. Access transformation improves spatial locality by changing a random access pattern into sequential access; it is applicable to iterative algorithms that process random vertices/edges in each iteration. The task-set reduction technique enhances temporal locality; it is applicable to algorithms that repeatedly access the same data to perform a certain task. Using the two techniques, we propose novel algorithms for three graph problems: k-core decomposition, maximal clique enumeration and triangle listing. We have implemented the algorithms, and the results show that they provide significant improvements in performance and also scale well.
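The general idea behind trading random access for sequential access can be illustrated with a toy degree-counting example (a generic sketch, not the dissertation's access-transformation algorithm): instead of scattering updates across the degree array in whatever order edges arrive, first sort the endpoints so writes sweep the array left to right.

```python
def degrees_random_order(edges, n):
    """Count vertex degrees in arrival order: writes to `deg` jump
    around the array, giving poor spatial locality on large graphs."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def degrees_sequential(edges, n):
    """Same result, but endpoints are sorted first, so the updates to
    `deg` proceed in increasing index order (a sequential sweep)."""
    deg = [0] * n
    for u in sorted(u for e in edges for u in e):
        deg[u] += 1
    return deg

edges = [(0, 2), (3, 1), (2, 3), (0, 3)]
assert degrees_random_order(edges, 4) == degrees_sequential(edges, 4)
```

On a toy list the two are indistinguishable; on graphs far larger than the cache, the sorted sweep is the kind of access pattern the hardware prefetcher can exploit.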
Fundamentals of spreading processes in single and multilayer complex networks
Spreading processes have been studied extensively in the literature, both analytically and by means of large-scale numerical simulations. These processes mainly include the propagation of diseases, rumors and information through a given population. In the last two decades, with the advent of modern network science, we have witnessed significant advances in this field of research. Here we review the main theoretical and numerical methods developed for the study of spreading processes on complex networked systems. Specifically, we formally define epidemic processes on single and multilayer networks and discuss in detail the main methods used to perform numerical simulations. Throughout the review, we classify spreading processes (disease and rumor models) into two classes according to the nature of time: (i) continuous-time approaches and (ii) cellular automata approaches, where the latter can be further divided into synchronous and asynchronous updating schemes. Our review includes the heterogeneous mean-field, the quenched mean-field, and the pair quenched mean-field approaches, as well as their respective simulation techniques, emphasizing similarities and differences among them. The content presented here offers a whole suite of methods to study epidemic-like processes in complex networks, both for researchers without previous experience in the subject and for experts.