Network Topology Mapping from Partial Virtual Coordinates and Graph Geodesics
For many important network types (e.g., sensor networks in complex harsh
environments and social networks), physical coordinate systems (e.g.,
Cartesian) and physical distances (e.g., Euclidean) are either difficult to
discern or inapplicable. Accordingly, coordinate systems and characterizations
based on hop-distance measurements, such as Topology Preserving Maps (TPMs) and
Virtual-Coordinate (VC) systems are attractive alternatives to Cartesian
coordinates for many network algorithms. Herein, we present an approach to
recover geometric and topological properties of a network with a small set of
distance measurements. In particular, our approach is a combination of shortest
path (often called geodesic) recovery concepts and low-rank matrix completion,
generalized to the case of hop-distances in graphs. Results for sensor networks
embedded in 2-D and 3-D spaces, as well as social networks, indicate that
the method can accurately capture the network connectivity with a small set of
measurements. TPM generation can now also be based on various
context-appropriate measurements or VC systems, as long as they characterize different
nodes by distances to small sets of random nodes (instead of a set of global
anchors). The proposed method is a significant generalization that allows the
topology to be extracted from a random set of graph shortest paths, making it
applicable in contexts such as social networks where VC generation may not be
possible.
Comment: 17 pages, 9 figures. arXiv admin note: substantial text overlap with arXiv:1712.1006
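As a hedged illustration of the core idea (not the paper's exact algorithm), the sketch below builds the hop-distance matrix of a hypothetical toy ring network via BFS, hides a random subset of entries, and fills them in with a simple hard-thresholded low-rank SVD completion. All names, the network, and the parameters are illustrative assumptions.

```python
import numpy as np
from collections import deque

def hop_distances(adj):
    # BFS from every node to obtain the full hop-distance matrix
    n = len(adj)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if D[s, v] == np.inf:
                    D[s, v] = D[s, u] + 1
                    q.append(v)
    return D

def complete_low_rank(M, mask, rank=4, iters=200):
    # Iterated hard-thresholded SVD: keep observed entries fixed,
    # re-estimate the missing ones from a rank-r approximation.
    X = np.where(mask, M, M[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = M[mask]
    return X

# hypothetical toy network: a ring of 10 sensor nodes
n = 10
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
D = hop_distances(adj)
rng = np.random.default_rng(0)
mask = rng.random((n, n)) < 0.6   # observe about 60% of hop distances
mask |= mask.T                    # observations are symmetric
D_hat = complete_low_rank(D, mask)
err = np.abs(D_hat - D).max()     # residual on the hidden entries
```

The actual method combines geodesic (shortest-path) recovery with matrix completion in a more principled way; this sketch only shows the low-rank completion step on sampled hop distances.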
A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing
Compared to traditional distributed computing environments such as grids,
cloud computing provides a more cost-effective way to deploy scientific
workflows. Each task of a scientific workflow requires several large datasets
that are located in different datacenters from the cloud computing environment,
resulting in serious data transmission delays. Edge computing reduces these
delays and allows a scientific workflow's private datasets to be stored at
fixed edge locations, but its storage capacity is a bottleneck.
It is a challenge to combine the advantages of both edge computing and cloud
computing to rationalize the data placement of scientific workflow, and
optimize the data transmission time across different datacenters. Traditional
data placement strategies maintain load balancing with a given number of
datacenters, which results in a large data transmission time. In this study, a
self-adaptive discrete particle swarm optimization algorithm with genetic
algorithm operators (GA-DPSO) was proposed to optimize the data transmission
time when placing data for a scientific workflow. This approach considered the
characteristics of data placement combining edge computing and cloud computing.
In addition, it considered the factors that affect transmission delay,
such as the bandwidth between datacenters, the number of edge datacenters, and
the storage capacity of edge datacenters. The crossover operator and mutation
operator of the genetic algorithm were adopted to avoid the premature
convergence of the traditional particle swarm optimization algorithm, which
enhanced the diversity of population evolution and effectively reduced the data
transmission time. The experimental results show that the data placement
strategy based on GA-DPSO can effectively reduce the data transmission time
during workflow execution in a combined edge and cloud computing environment.
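A minimal sketch of the GA-DPSO idea, under simplifying assumptions (a toy cost model with hypothetical dataset sizes, uniform bandwidth, and a capacity penalty; not the paper's exact formulation): each particle is a discrete placement of datasets onto datacenters, and the usual PSO velocity update is replaced by crossover with the personal and global bests plus mutation.

```python
import random

SIZES = [5, 3, 8, 2, 6]              # dataset sizes in GB (hypothetical)
TASKS = [(0, 1), (1, 2, 3), (3, 4)]  # datasets each task needs
N_DC, BW = 3, 1.0                    # datacenters, uniform bandwidth (GB/s)
CAP = 12                             # per-datacenter storage capacity (GB)

def cost(p):
    # run each task where most of its data lives; move the rest over the network
    t = 0.0
    for need in TASKS:
        best = min(range(N_DC),
                   key=lambda dc: sum(SIZES[d] for d in need if p[d] != dc))
        t += sum(SIZES[d] for d in need if p[d] != best) / BW
    for dc in range(N_DC):           # penalize storage-capacity violations
        load = sum(SIZES[d] for d, loc in enumerate(p) if loc == dc)
        t += 100 * max(0, load - CAP)
    return t

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randrange(N_DC) if random.random() < rate else g for g in p]

def ga_dpso(n_particles=20, iters=100):
    random.seed(1)
    swarm = [[random.randrange(N_DC) for _ in SIZES] for _ in range(n_particles)]
    pbest = list(swarm)
    gbest = min(swarm, key=cost)
    for _ in range(iters):
        for i, x in enumerate(swarm):
            # "velocity" update replaced by GA operators toward pbest and gbest
            x = mutate(crossover(crossover(x, pbest[i]), gbest))
            swarm[i] = x
            if cost(x) < cost(pbest[i]):
                pbest[i] = x
        gbest = min(pbest, key=cost)
    return gbest, cost(gbest)

best, c = ga_dpso()
print(best, round(c, 2))
```

The mutation operator is what counters the premature convergence of plain discrete PSO: it keeps injecting diversity into the swarm even after pbest and gbest stabilize.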
DGRec: Graph Neural Network for Recommendation with Diversified Embedding Generation
Graph Neural Network (GNN) based recommender systems have attracted increasing
attention in recent years due to their excellent accuracy. Representing
user-item interactions as a bipartite graph, a GNN model
generates user and item representations by aggregating embeddings of their
neighbors. However, such an aggregation procedure often accumulates information
purely based on the graph structure, overlooking the redundancy of the
aggregated neighbors and resulting in poor diversity of the recommended list.
In this paper, we propose diversifying GNN-based recommender systems by
directly improving the embedding generation procedure. Particularly, we utilize
the following three modules: submodular neighbor selection to find a subset of
diverse neighbors to aggregate for each GNN node, layer attention to assign
attention weights for each layer, and loss reweighting to focus on the learning
of items belonging to long-tail categories. Blending the three modules into
GNN, we present DGRec (Diversified GNN-based Recommender System) for diversified
recommendation. Experiments on real-world datasets demonstrate that the
proposed method can achieve the best diversity while keeping the accuracy
comparable to state-of-the-art GNN-based recommender systems.
Comment: 9 pages, WSDM 202
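The submodular neighbor selection module can be sketched as greedy facility-location maximization, a standard submodular technique; the details below, including the toy embeddings, are illustrative assumptions rather than DGRec's exact procedure.

```python
import numpy as np

def submodular_select(emb, k):
    """Greedily pick k diverse neighbors from embeddings emb (n x d).

    Facility location f(S) = sum_i max_{j in S} sim(i, j) is monotone
    submodular, so the greedy rule enjoys a (1 - 1/e) approximation;
    it naturally avoids aggregating redundant (near-duplicate) neighbors.
    """
    sim = emb @ emb.T                 # pairwise similarities
    covered = np.zeros(len(emb))      # best similarity achieved so far per node
    chosen = []
    for _ in range(min(k, len(emb))):
        # marginal coverage gain of adding each candidate
        gains = np.maximum(sim, covered).sum(axis=1) - covered.sum()
        gains[chosen] = -np.inf       # never re-pick a chosen neighbor
        j = int(np.argmax(gains))
        chosen.append(j)
        covered = np.maximum(covered, sim[j])
    return chosen

# toy example: two tight clusters of neighbors; greedy picks one from each
emb = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
chosen = submodular_select(emb, 2)   # one pick from each cluster
```

Aggregating only the selected subset per GNN node is what trades a little structural coverage for diversity in the final recommendation list.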
Delta Building Visualisation and Optimisation
During this thesis, a visualisation of an academic building model was created in the Unity game engine. The thesis describes the optimisations and the pathfinding solution for simulated people, as well as the design principles used to make the visualisation of the Delta academic building user-friendly and enjoyable for the viewer. The thesis concludes with testing of the optimisation and pathfinding, along with verifying that the visualisation was pleasant to watch.
Quantum Switches for Gottesman-Kitaev-Preskill Qubit-based All-Photonic Quantum Networks
The Gottesman-Kitaev-Preskill (GKP) code, being information theoretically
near optimal for quantum communication over Gaussian thermal-loss optical
channels, is likely to be the encoding of choice for advanced quantum networks
of the future. Quantum repeaters based on GKP-encoded light have been shown to
support high end-to-end entanglement rates across large distances despite
realistic finite squeezing in GKP code preparation and homodyne detection
inefficiencies. Here, we introduce a quantum switch for GKP-qubit-based quantum
networks, whose architecture involves multiplexed GKP-qubit-based entanglement
link generation with clients, and their all-photonic storage, together enabled
by GKP-qubit graph state resources. For bipartite entanglement distribution
between clients via entanglement swapping, the switch uses a multi-client
generalization of a recently introduced protocol heuristic. Since generating the GKP-qubit graph state
resource is hardware intensive, given a total resource budget and an arbitrary
layout of clients, we address the question of their optimal allocation towards
the different client-pair connections served by the switch such that the sum
throughput of the switch is maximized while also being fair in terms of the
individual entanglement rates. We illustrate our results for an exemplary data
center network, where the data center is a client of a switch and all of its
other clients aim to connect to the data center alone -- a scenario that also
captures the general case of a gateway router connecting a local area network
to a global network. Together with compatible quantum repeaters, our quantum
switch provides a way to realize quantum networks of arbitrary topology.
Comment: 13 pages, 8 figures
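The fair resource-allocation question can be illustrated with a proportional-fairness heuristic: hand out discrete resource units greedily so as to maximize the sum of log-rates, which balances total switch throughput against per-client-pair fairness. The concave rate curves below are hypothetical stand-ins, not the paper's GKP rate model.

```python
import math

def allocate(budget, rates):
    """Greedy proportional-fair allocation of integer resource units.

    rates[i](n) = entanglement rate of client pair i given n resource
    units (assumed concave and increasing). Since the objective
    sum_i log rates[i](n_i) is separable and concave, adding each unit
    where the log-rate gain is largest is optimal.
    """
    alloc = [1] * len(rates)          # every pair gets at least one unit
    for _ in range(budget - len(rates)):
        gains = [math.log(r(a + 1)) - math.log(r(a))
                 for r, a in zip(rates, alloc)]
        i = max(range(len(rates)), key=gains.__getitem__)
        alloc[i] += 1
    return alloc

# hypothetical rate curves for three client pairs of a data-center switch
rates = [lambda n: 1 - 0.5 ** n,       # near pair: saturates quickly
         lambda n: 0.5 * n / (n + 2),  # mid-distance pair
         lambda n: 0.1 * n / (n + 5)]  # far pair: low, slow-growing rate
alloc = allocate(12, rates)
print(alloc)
```

Maximizing the plain sum of rates would starve the far pair entirely; the logarithm is what enforces non-zero individual entanglement rates while still rewarding throughput.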
Unreliable and resource-constrained decoding
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).
Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated: broadly speaking, communication using decoders that are wiring cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.
For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.
Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied within this framework. The ratio between the costs of false alarms and missed detections is shown to fundamentally affect the essential nature of discrimination.
The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.
Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.
For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.
By Lav R. Varshney, Ph.D.
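The threshold-degradation result for noisy decoders can be illustrated on the binary erasure channel. The sketch below bisects for the erasure threshold of a (dv, dc)-regular LDPC ensemble under standard density evolution, with a crude decoder-noise model bolted on (each message independently erased with probability sigma; that model is an assumption of this sketch, not the thesis's exact fault model). The threshold visibly shrinks as sigma grows.

```python
def de_threshold(dv, dc, sigma=0.0, iters=2000, sep=0.15):
    """Largest BEC erasure rate eps for which density evolution succeeds.

    Noiseless recursion: x <- eps * (1 - (1 - x)**(dc-1))**(dv-1).
    Decoder noise adds a floor: each message is independently erased
    with probability sigma, so the success branch settles near sigma
    while the failure branch settles near 0.26+; `sep` separates them.
    """
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = 1 - (1 - sigma) * (1 - eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1))
        return x < sep

    lo, hi = 0.0, 1.0
    while hi - lo > 1e-4:            # bisect on the erasure rate
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

t_clean = de_threshold(3, 6)        # classic noiseless value, about 0.429
t_noisy = de_threshold(3, 6, 0.01)  # strictly smaller with decoder noise
print(t_clean, t_noisy)
```

Note that with sigma > 0 the residual erasure fraction never drops below roughly sigma, matching the thesis's observation that noisy decoders may leave a small final error while thresholds degrade smoothly.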
Heuristic algorithms for wireless mesh network planning
Technologies like IEEE 802.16j wireless mesh networks are drawing increasing attention from
the research community. Mesh networks are economically viable and may extend services
such as the Internet to remote locations. This thesis addresses a planning problem in
IEEE 802.16j networks, where we need to establish minimum-cost relay and base stations to
cover the bandwidth demand of wireless clients. A special feature of this planning problem
is that any node in this network can send data to at most one node towards the next hop,
thus traffic flow is unsplittable from source to destination.
We study different integer programming formulations of the problem. We propose four
types of heuristic algorithms that use greedy, local search, variable neighborhood search,
and Lagrangian relaxation based approaches. We evaluate the algorithms
on a database of network instances of 500-5000 nodes, some of which are randomly
generated, while the others are generated from a geometric distribution.
Our experiments show that the proposed algorithms produce satisfactory results
compared to benchmarks produced by general-purpose optimization solver software.
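The greedy approach can be sketched as a cost-effectiveness set-cover heuristic, a simplified stand-in for the thesis's algorithms that ignores relay hops and the unsplittable-flow constraint: repeatedly open the station with the lowest cost per unit of newly covered client demand. The station sets and demands below are hypothetical.

```python
def greedy_plan(stations, demand):
    """Pick stations covering all client demand at low total cost.

    stations: {name: (cost, set_of_covered_clients)}
    demand:   {client: bandwidth demand}
    Assumes the union of all stations covers every client.
    """
    uncovered = set(demand)
    opened, total = [], 0.0
    while uncovered:
        # best cost per unit of newly covered demand
        name, (cost, cov) = min(
            ((n, s) for n, s in stations.items()
             if n not in opened and s[1] & uncovered),
            key=lambda kv: kv[1][0] / sum(demand[c] for c in kv[1][1] & uncovered))
        opened.append(name)
        total += cost
        uncovered -= cov
    return opened, total

# hypothetical candidate base stations and client demands
stations = {"bs1": (10.0, {"a", "b"}),
            "bs2": (4.0, {"b", "c"}),
            "bs3": (7.0, {"a", "c", "d"})}
demand = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.0}
opened, total = greedy_plan(stations, demand)
print(opened, total)
```

The thesis's local search, variable neighborhood search, and Lagrangian variants would then refine such a greedy solution; this sketch shows only the cheapest-coverage-first construction step.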
Privacy-preserving human mobility and activity modelling
The exponential proliferation of digital trends and worldwide responses to the COVID-19 pandemic thrust the world into digitalization and interconnectedness, pushing ever newer technologies, devices, and applications into the market. Increasingly intimate user data are collected for beneficial analyses aimed at improving well-being, yet are shared with or without the user's consent, emphasizing the importance of making human mobility and activity models inclusive, private, and fair. In this thesis, I develop and implement advanced methods and algorithms to model human mobility and activity in terms of temporal-context dynamics, multi-occupancy impacts, privacy protection, and fair analysis.
The following research questions have been thoroughly investigated: i) whether temporal information integrated into deep learning networks can improve prediction accuracy for both the next activity and its timing; ii) what the trade-off is between cost and performance when optimizing the sensor network for multiple-occupancy smart homes; iii) whether malicious purposes such as user re-identification in human mobility modelling can be mitigated by adversarial learning; iv) what the fairness implications of mobility models are, and whether privacy-preserving techniques perform equally well for different groups of users.
To answer these research questions, I develop different architectures to model human activity and mobility. I first clarify the temporal-context dynamics in human activity modelling and achieve better prediction accuracy by appropriately using the temporal information. I then design a framework, MoSen, to simulate the interaction dynamics among residents and intelligent environments and to generate an effective sensor network strategy. To relieve users' privacy concerns, I design Mo-PAE and show that the privacy of mobility traces attains decent protection at a marginal utility cost. Last but not least, I investigate the relations between fairness and privacy and conclude that while the privacy-aware model guarantees group fairness, it violates the individual fairness criteria.
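The gap between group and individual fairness noted in the conclusion can be made concrete with two simple metrics (illustrative textbook definitions, not the thesis's exact criteria): demographic parity across groups versus a Lipschitz-style pairwise check. The toy data below shows a model with a zero group-fairness gap that still treats two nearly identical individuals very differently.

```python
import numpy as np

def group_fairness_gap(preds, groups):
    # demographic parity gap: spread of mean prediction across groups
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def individual_unfairness(preds, feats, lip=1.0):
    # fraction of pairs violating the Lipschitz condition
    # |f(x) - f(y)| <= lip * ||x - y||  ("similar people, similar outcomes")
    n, viol, pairs = len(preds), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if abs(preds[i] - preds[j]) > lip * np.linalg.norm(feats[i] - feats[j]):
                viol += 1
    return viol / pairs

# toy cohort: each group holds two near-identical users with opposite outcomes
feats = np.array([[0.0, 0.0], [0.01, 0.0], [5.0, 5.0], [5.0, 5.01]])
preds = np.array([1.0, 0.0, 1.0, 0.0])
groups = np.array([0, 0, 1, 1])
gap = group_fairness_gap(preds, groups)       # 0.0: groups look equal
unfair = individual_unfairness(preds, feats)  # > 0: individuals mistreated
```

This is exactly the tension the thesis reports for privacy-aware models: equalizing aggregate statistics per group says nothing about how any two similar individuals are treated.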
From bench to bedside: the development of a location indicating nasogastric tube
Background: Nasogastric tubes are frequently used in clinical practice. Correct placement in the stomach must be verified on passing the tube and before every feed or administration of medicine. Current methods of confirming placement are limited, and complications related to incorrect placement are well documented. The need for an easy, safe, reliable bedside method for verifying nasogastric tube placement has been identified.
Aim: To develop a manufactured prototype of an effective, sensitive and reliable nasogastric tube which self-indicates its position and is ready for clinical investigation in patients.
Methods: A pH-sensitive redox polymer, vitamin K1, was applied to the tip of 40 hand-adapted nasogastric tubes (iteration 1) that were then assessed in pH solutions and clinical samples. Results were used to inform the design of manufactured prototype tubes (iteration 2). A total of 60 iteration 2 tubes were prepared and evaluated in a range of fluids, resected stomach tissue, gastric fluid and sputum. Documentation for regulatory approval of the new device was prepared, and the intellectual property was protected in preparation for licensing with a commercial partner. A User Network was established to inform the design and development of the device.
Results: A total of 100 prototype tubes were evaluated. One third of iteration 1 prototypes, and all of the iteration 2 manufactured prototypes, generated a measurable current. Variation in the size and nature of the gastric tissue samples limited the definitive conclusions that could be drawn from these experiments, but guided design choices in an iterative manner. However, experiments with human gastric fluid demonstrated that, using linear sweep voltammetry, the zero current potential gave a clearer distinction of pH than amperometry in the desired pH range. Patent protection (granted in Australia, the USA and Canada, and pending in Europe) of the associated intellectual property and completion of the regulatory approvals process enabled negotiations with a number of companies interested in manufacturing the novel medical device for clinical trials. A User Network was established and a range of communication strategies developed to ensure that the development of the device was informed by the current experience of lay and professional users.
Conclusion: This thesis documents a translational research study in which an understanding of electrochemistry was applied to a current clinical problem, generating new knowledge. It was demonstrated that, when a redox polymer is applied to the distal tip of a nasogastric tube, the electrochemical reaction can be measured at the proximal end, and assessment of the zero current potential distinguishes fluids of different pH values. New understanding of the reality of user involvement in the development of medical devices was generated, and a flexible User Network approach is advocated. A commercially manufactured device with appropriate regulatory approvals was produced, ready for clinical trials, with patents granted or pending across the globe.