Self-adaptive node-based PCA encodings
In this paper we propose an algorithm, Simple Hebbian PCA, and prove that it is able to compute principal component analysis (PCA) in a distributed fashion across nodes. It simplifies existing network structures by removing intralayer weights, essentially cutting the number of weights that need to be trained in half.
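The single-unit Hebbian PCA idea can be illustrated with the classic Oja's rule, a minimal sketch only (the paper's Simple Hebbian PCA and its multi-node distribution scheme are not reproduced here): a Hebbian term with a decay keeps the weight vector bounded while it converges to the first principal component.

```python
import numpy as np

# Sketch of a Hebbian PCA update (Oja's rule). This is the textbook
# single-unit variant, used here only to illustrate the principle; it is
# not the paper's distributed "Simple Hebbian PCA" algorithm.

rng = np.random.default_rng(0)

# Toy data: 2-D samples with most variance along a 45-degree direction.
X = rng.normal(size=(5000, 2)) @ np.diag([3.0, 0.5])
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
X = X @ rot.T

w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.005
for x in X:
    y = w @ x                   # neuron output
    w += lr * y * (x - y * w)   # Hebbian term y*x with Oja's decay y^2*w

# Compare against the leading eigenvector from batch PCA.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, -1]
alignment = abs(w @ pc1) / np.linalg.norm(w)
```

The decay term implicitly normalises `w`, so no explicit intralayer or renormalisation machinery is needed, which is the kind of simplification the abstract alludes to.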
Dimensionality reduction for parametric design exploration
In architectural design, parametric models often include numeric parameters that can be adjusted to explore different design options. The resulting design space can be easily displayed to the user if the number of parameters is low, for example using a simple two- or three-dimensional plot. However, visualising the design space of models defined by many parameters is not straightforward. In this paper it is shown how dimensionality reduction can assist in this task whilst retaining associativity between input designs in a high-dimensional parameter space. A form of dimensionality reduction based on neural networks, the Self-Organising Map (SOM), is used in combination with Rhino Grasshopper to demonstrate the approach and its potential benefits for human/machine design exploration.
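A minimal SOM sketch shows how high-dimensional parameter vectors can be projected onto a 2-D grid of nodes while preserving neighbourhood structure. The grid size, Gaussian neighbourhood, and decay schedules below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal Self-Organising Map: maps 5-D "design parameter" vectors onto a
# 6x6 grid. All hyperparameters here are assumptions for illustration.

rng = np.random.default_rng(1)
grid_w, grid_h, dim = 6, 6, 5
weights = rng.uniform(size=(grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w)
                   for j in range(grid_h)], dtype=float)

X = rng.uniform(size=(300, dim))   # toy samples from a parametric design space

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                        # decaying learning rate
    sigma = 3.0 * (1 - t / n_iter) + 0.5               # shrinking radius
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)

# Each input design now maps to a grid cell; nearby cells hold similar designs.
cells = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in X]
```

In a Grasshopper workflow, each grid cell would be rendered as a thumbnail of the corresponding design, giving the user a browsable 2-D map of the parameter space.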
Learning image components for object recognition
In order to perform object recognition it is necessary to learn representations of the underlying components of images. Such components correspond to objects, object-parts, or features. Non-negative matrix factorisation is a generative model that has been specifically proposed for finding such meaningful representations of image data, through the use of non-negativity constraints on the factors. This article reports on an empirical investigation of the performance of non-negative matrix factorisation algorithms. It is found that such algorithms need to impose additional constraints on the sparseness of the factors in order to successfully deal with occlusion. However, these constraints can themselves result in these algorithms failing to identify image components under certain conditions. In contrast, a recognition model (a competitive learning neural network algorithm) reliably and accurately learns representations of elementary image features without such constraints.
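The basic factorisation being evaluated can be sketched with the standard Lee-Seung multiplicative updates, which keep both factors non-negative by construction. The sparseness constraints the article finds necessary under occlusion are deliberately omitted here; this shows only the unconstrained baseline.

```python
import numpy as np

# Unconstrained NMF via multiplicative updates: V ~ W @ H with W, H >= 0.
# Toy random data stands in for image data; no sparseness constraint applied.

rng = np.random.default_rng(2)
V = rng.uniform(size=(20, 30))       # non-negative "image" matrix
r = 4                                # number of components to learn
W = rng.uniform(size=(20, r)) + 0.1
H = rng.uniform(size=(r, 30)) + 0.1

eps = 1e-9                           # avoids division by zero
err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, stays non-negative
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, stays non-negative
err1 = np.linalg.norm(V - W @ H)
```

Because the updates are element-wise multiplications of non-negative quantities, non-negativity is preserved automatically; the article's point is that this alone does not guarantee the factors correspond to meaningful image parts.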
Enhanced web services performance by compression and similarity-based aggregation of SOAP traffic
Many organizations around the world have adopted Web services, server farms hosted by large enterprises, and data centres for various applications. Web services offer several advantages over other communication technologies. However, they still exhibit high latency and often suffer congestion and bottlenecks due to the massive load generated by large numbers of end users issuing Web service requests. Simple Object Access Protocol (SOAP) is the basic Extensible Markup Language (XML) communication protocol of Web services that is widely used over the Internet. SOAP provides interoperability by establishing access among Web servers and clients on the same or different platforms. However, the XML format is verbose, and encoded messages are often larger than the actual payload, causing dense traffic over the network. This thesis proposes three innovative techniques capable of reducing small, as well as very large, messages. Furthermore, new redundancy-aware SOAP Web message aggregation models (Binary-tree, Two-bit, and One-bit XML status trees) are proposed to enable Web servers to aggregate SOAP responses and send them back as one compact aggregated message, thereby reducing the required bandwidth and latency and improving the overall performance of Web services. Fractals, as a mathematical model, provide powerful self-similarity measurements for the fragments of regular and irregular geometric objects in their numeric representations. Fractal mathematical parameters are introduced to compute SOAP message similarities, applied to the numeric representation of SOAP messages. Furthermore, SOAP fractal similarities are developed to devise a new unsupervised auto-clustering technique. A fast fractal-similarity-based clustering technique is proposed with the aim of speeding up the selection of similar messages to be aggregated together, in order to achieve greater reduction.
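The core idea of similarity-based aggregation can be illustrated with a toy sketch: group similar SOAP-like responses and compress each group as one unit, so shared XML structure is encoded once. The thesis's fractal similarity measure and status-tree models are replaced here by `difflib` ratios and `zlib`, purely for illustration.

```python
import zlib
from difflib import SequenceMatcher

# Toy demonstration of similarity-based aggregation: messages 1 and 2 share
# almost all structure, message 3 differs more. Fractal similarity and the
# status-tree aggregation models from the thesis are NOT implemented here.

messages = [
    "<soap:Envelope><soap:Body><Price item='A'>10</Price></soap:Body></soap:Envelope>",
    "<soap:Envelope><soap:Body><Price item='B'>12</Price></soap:Body></soap:Envelope>",
    "<soap:Envelope><soap:Body><Stock item='C'>99</Stock></soap:Body></soap:Envelope>",
]

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Greedy clustering: add each message to the first sufficiently similar group.
groups: list[list[str]] = []
for m in messages:
    for g in groups:
        if similar(m, g[0]):
            g.append(m)
            break
    else:
        groups.append([m])

# Compare compressing each message alone vs. compressing each group together.
plain = sum(len(zlib.compress(m.encode())) for m in messages)
aggregated = sum(len(zlib.compress("\n".join(g).encode())) for g in groups)
```

Compressing near-duplicate messages together lets the compressor back-reference the shared structure, which is the redundancy the aggregation models exploit at scale.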
Position-Aware Subgraph Neural Networks with Data-Efficient Learning
Data-efficient learning on graphs (GEL) is essential in real-world applications. Existing GEL methods focus on learning useful representations for nodes, edges, or entire graphs with "small" labeled data, but the problem of data-efficient learning for subgraph prediction has not been explored. The challenges of this problem lie in the following aspects: 1) It is crucial for subgraphs to learn positional features in order to acquire structural information about the base graph in which they exist. Although the existing subgraph neural network method is capable of learning disentangled position encodings, its overall computational complexity is very high. 2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs, because a subgraph contains fewer nodes but richer information such as position, neighbors, and structure; subgraph augmentation is therefore more susceptible to undesirable perturbations. 3) Only a small number of nodes in the base graph are contained in subgraphs, which leads to a potential "bias" problem: subgraph representation learning is dominated by these "hot" nodes, while the remaining nodes fail to be fully learned, reducing the generalization ability of subgraph representation learning. In this paper, we address the challenges above and propose a Position-Aware Data-Efficient Learning framework for subgraph neural networks called PADEL. Specifically, we propose a novel anchor-free node position encoding method, design a new generative subgraph augmentation method based on a diffused variational subgraph autoencoder, and introduce exploratory and exploitable views for subgraph contrastive learning. Extensive experimental results on three real-world datasets show the superiority of our proposed method over state-of-the-art baselines.
Comment: 9 pages, 7 figures, accepted by WSDM 2
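The contrastive-learning ingredient can be sketched with a generic InfoNCE-style objective between two augmented views of each subgraph embedding. The function name and setup below are assumptions for illustration; PADEL's exploratory/exploitable view construction is not reproduced.

```python
import numpy as np

# Generic InfoNCE-style contrastive loss: (z1[i], z2[i]) are positive pairs
# (two views of the same subgraph), all other pairs are negatives. This is a
# sketch of the general objective, not PADEL's exact formulation.

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # temperature-scaled cosines
    # Softmax cross-entropy with the diagonal (the true pairing) as the label.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))

rng = np.random.default_rng(3)
z = rng.normal(size=(8, 16))                    # 8 subgraph embeddings
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # good views
random_views = info_nce(z, rng.normal(size=z.shape))        # unrelated views
```

Well-constructed augmented views keep the loss low for true pairs while pushing apart unrelated subgraphs, which is why augmentation quality (challenge 2 above) matters so much.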
Advances in Functional Encryption
Functional encryption is a novel paradigm for public-key encryption that enables both fine-grained access control and selective computation on encrypted data, as is necessary to protect big, complex data in the cloud. In this thesis, I provide a brief introduction to functional encryption and an overview of my contributions to the area.
Learning Interpretable Models Through Multi-Objective Neural Architecture Search
Monumental advances in deep learning have led to unprecedented achievements across a multitude of domains. While the performance of deep neural networks is indisputable, the architectural design and interpretability of such models are nontrivial. Research has emerged to automate the design of neural network architectures through neural architecture search (NAS). Recent progress has made these methods more pragmatic by exploiting distributed computation and novel optimization algorithms. However, there is little work on optimizing architectures for interpretability. To this end, we propose a multi-objective distributed NAS framework that optimizes for both task performance and introspection. We leverage the non-dominated sorting genetic algorithm (NSGA-II) and explainable AI (XAI) techniques to reward architectures that can be better comprehended by humans. The framework is evaluated on several image classification datasets. We demonstrate that jointly optimizing for introspection ability and task error leads to more disentangled architectures that perform within tolerable error.
Comment: 14 pages main text, 5 pages references, 17 pages supplementary
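The selection step at the heart of NSGA-II is non-dominated sorting, which can be sketched in a few lines. The objective values below are toy numbers standing in for (task error, an interpretability penalty), both minimised; the full NSGA-II crowding-distance machinery is omitted.

```python
# Sketch of the Pareto-front extraction underlying NSGA-II selection.
# Tuples are (task error, 1 - introspection score); both are minimised.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Five hypothetical candidate architectures from one NAS generation.
candidates = [(0.10, 0.9), (0.20, 0.4), (0.30, 0.2), (0.25, 0.5), (0.15, 0.8)]
front = pareto_front(candidates)
```

Candidates on the front represent different accuracy/interpretability trade-offs; dominated candidates, such as `(0.25, 0.5)` here (beaten by `(0.20, 0.4)` on both objectives), are discarded or ranked into later fronts.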