Standardized Exclusion: A Theory of Barrier Lock-In
The United States has relaxed antitrust scrutiny of private standard-setting organizations in recognition of their potential procompetitive benefits. In the meantime, however, the growing importance of network industries—and the coinciding move toward vendor-led standards consortia—has invited new, insidious anticompetitive risks. This Note proffers one such risk: barrier lock-in. A theory of barrier lock-in recognizes that dominant vendors can capture and control standards consortia to keep standardized equipment complex and costly. These practices are exclusionary. This Note situates barrier lock-in within the existing antitrust literature and jurisprudence, provides a potential example of barrier lock-in in the 5G network equipment standardization process, and proposes two solutions for future legislative, executive, and judicial action against misbehaving standard-setters.
Aerial Network Assistance Systems for Post-Disaster Scenarios: Topology Monitoring and Communication Support in Infrastructure-Independent Networks
Communication anytime and anywhere is necessary for our modern society to function. However, critical network infrastructure quickly fails in the face of a disaster and leaves the affected population without means of communication. This lack can be overcome by smartphone-based emergency communication systems built on infrastructure-independent networks such as Delay-Tolerant Networks (DTNs). DTNs, however, suffer from short device-to-device link distances and thus require multi-hop routing or data ferries between disjoint parts of the network. In disaster scenarios, this fragmentation is particularly severe because of the highly clustered human mobility behavior. Nevertheless, aerial communication support systems can connect local network clusters by utilizing Unmanned Aerial Vehicles (UAVs) as data ferries. To facilitate situation-aware and adaptive communication support, knowledge of the network topology, the identification of missing communication links, and the constant reassessment of dynamic disasters are required. Existing aerial monitoring approaches can detect devices and networks, yet they usually neglect these requirements.
In this dissertation, we therefore facilitate the coexistence of aerial topology monitoring and communication support mechanisms in an autonomous Aerial Network Assistance System for infrastructure-independent networks as our first contribution. To enable system adaptations to unknown and dynamic disaster situations, our second contribution addresses the collection, processing, and utilization of topology information. First, we introduce cooperative monitoring approaches to include the DTN in the monitoring process. Furthermore, we apply novel approaches for data aggregation and network cluster estimation to facilitate the continuous assessment of topology information and an appropriate system adaptation. Based on this, we introduce an adaptive topology-aware routing approach to reroute UAVs and increase the coverage of disconnected nodes outside clusters.
We generalize our contributions by integrating them into a simulation framework, creating an evaluation platform for autonomous aerial systems as our third contribution. We further increase the expressiveness of our aerial system evaluation by adding movement models for multicopter aircraft combined with power consumption models based on real-world measurements. Additionally, we improve the disaster simulation by generalizing civilian disaster mobility based on a real-world field test. With a prototypical system implementation, we extensively evaluate our contributions and show the significant benefits of cooperative monitoring and topology-aware routing. We highlight the importance of continuous and integrated topology monitoring for aerial communication support and demonstrate its necessity for an adaptive and long-term disaster deployment. In conclusion, the contributions of this dissertation enable the use of autonomous Aerial Network Assistance Systems and their adaptability in dynamic disaster scenarios.
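As a rough illustration of the cluster-estimation and data-ferry ideas described above, the sketch below groups monitored node positions into connected clusters and places a UAV waypoint between the two largest ones. It is a hypothetical toy example; the function names, the union-find clustering, and the 100 m link range are illustrative assumptions, not the dissertation's actual algorithms.

```python
# Toy sketch: estimate DTN clusters from node positions, then pick a
# UAV data-ferry waypoint between the two largest clusters.
from itertools import combinations
from math import dist

LINK_RANGE_M = 100.0  # assumed device-to-device link distance

def estimate_clusters(positions):
    """Group nodes into connected clusters via union-find over link range."""
    parent = list(range(len(positions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for a, b in combinations(range(len(positions)), 2):
        if dist(positions[a], positions[b]) <= LINK_RANGE_M:
            parent[find(a)] = find(b)
    clusters = {}
    for i in range(len(positions)):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values(), key=len, reverse=True)

def ferry_waypoint(positions):
    """Midpoint between the centroids of the two largest clusters."""
    big, second = estimate_clusters(positions)[:2]
    def centroid(ids):
        return (sum(positions[i][0] for i in ids) / len(ids),
                sum(positions[i][1] for i in ids) / len(ids))
    (x1, y1), (x2, y2) = centroid(big), centroid(second)
    return ((x1 + x2) / 2, (y1 + y2) / 2)

# Two clusters: three nodes near the origin, two nodes far away.
nodes = [(0, 0), (50, 0), (30, 40), (400, 400), (450, 420)]
print(ferry_waypoint(nodes))
```

A real system would replace the static positions with continuously collected monitoring data and re-plan waypoints as the topology changes.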
On the Principles of Evaluation for Natural Language Generation
Natural language processing is concerned with the ability of computers to understand natural language texts, arguably one of the major bottlenecks on the path toward general Artificial Intelligence. Given the unprecedented success of deep learning technology, the natural language processing community has focused almost entirely on practical applications, with state-of-the-art systems emerging and competing for human-parity performance at an ever-increasing pace. For that reason, fair and adequate evaluation and comparison, responsible for ensuring trustworthy, reproducible and unbiased results, have long occupied the scientific community, not only in natural language processing but also in other fields. A popular example is the ISO-9126 evaluation standard for software products, which outlines a wide range of evaluation concerns, such as cost, reliability, scalability, security, and so forth. The European project EAGLES-1996, an acclaimed extension of ISO-9126, laid out the fundamental principles specifically for evaluating natural language technologies, which underpin succeeding methodologies in the evaluation of natural language.
Natural language processing encompasses an enormous range of applications, each with its own evaluation concerns, criteria and measures. This thesis cannot hope to be comprehensive and instead addresses evaluation in natural language generation (NLG), arguably one of the most human-like natural language applications. In this context, research on quantifying day-to-day progress with evaluation metrics lays the foundation of the fast-growing NLG community. However, previous work has failed to deliver high-quality metrics in several scenarios, such as evaluating long texts or evaluating when human references are not available; more prominently, these studies are limited in scope, lacking a holistic view of principled NLG evaluation.
In this thesis, we aim for a holistic view of NLG evaluation from three complementary perspectives, driven by the evaluation principles in EAGLES-1996: (i) high-quality evaluation metrics, (ii) rigorous comparison of NLG systems for properly tracking progress, and (iii) understanding evaluation metrics. To this end, we identify the current challenges arising from the inherent characteristics of these perspectives, and then present novel metrics, rigorous comparison approaches, and explainability techniques for metrics to address the identified issues.
We hope that our work on evaluation metrics, system comparison and explainability for metrics inspires more research towards principled NLG evaluation, and contributes to fair and adequate evaluation and comparison in natural language processing.
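To make the notion of a reference-based evaluation metric concrete, the sketch below computes a clipped n-gram precision, the textbook surface-overlap score whose known weaknesses (long texts, dependence on human references) motivate the thesis. It is a standard BLEU-style building block, not one of the metrics proposed in the work.

```python
# Clipped n-gram precision of a candidate text against one reference.
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Fraction of candidate n-grams that also occur in the reference,
    with counts clipped to the reference's counts."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

print(ngram_precision("the cat sat on the mat", "the cat is on the mat"))
```

Such scores correlate poorly with human judgments on long or reference-free outputs, which is precisely the gap learned metrics aim to close.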
Robustness against adversarial attacks on deep neural networks
While deep neural networks have been successfully applied in several different domains, they exhibit vulnerabilities to artificially-crafted perturbations in data. Moreover, these perturbations have been shown to be transferable: the same perturbation can fool different models. In response to this problem, many robust learning approaches have emerged. Adversarial training is regarded as a mainstream approach to enhancing the robustness of deep neural networks with respect to norm-constrained perturbations. However, adversarial training requires a large number of perturbed examples (e.g., over 100,000 examples for the MNIST dataset) before robustness is considerably enhanced, which is problematic due to the large computational cost of obtaining attacks. Developing computationally effective approaches that retain robustness against norm-constrained perturbations remains a challenge in the literature.
In this research we present two novel robust training algorithms based on Monte-Carlo Tree Search (MCTS) [1] to enhance robustness under norm-constrained perturbations [2, 3]. The first algorithm searches for potential candidates with the Scale-Invariant Feature Transform method and makes decisions with MCTS [2]. The second algorithm adopts a Decision Tree Search (DTS) method to accelerate the search process while maintaining effectiveness [3]. Our overarching objective is to provide computationally effective approaches that can be deployed to train deep neural networks robust against perturbations in data. We illustrate the robustness of these algorithms by studying their resistance to adversarial examples on the MNIST and CIFAR10 datasets. For MNIST, the results showed an average training-effort saving of 21.1% compared to Projected Gradient Descent (PGD) and 28.3% compared to the Fast Gradient Sign Method (FGSM). For CIFAR10, we obtained an average efficiency improvement of 9.8% compared to PGD and 13.8% compared to FGSM. The results suggest that the two methods introduced here are not only robust to norm-constrained perturbations but also efficient during training.
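For readers unfamiliar with norm-constrained attacks such as FGSM, the sketch below applies the standard fast gradient sign perturbation to a toy logistic model. The model, weights and epsilon are illustrative assumptions; the thesis's MCTS/DTS training algorithms themselves are not reproduced here.

```python
# Minimal FGSM sketch: x_adv = x + eps * sign(grad_x loss), here for a
# two-feature logistic model with cross-entropy loss.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Return the FGSM-perturbed input for logistic loss."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]   # gradient of loss w.r.t. x
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w = [2.0, -1.0]            # toy model weights (assumed)
x, y = [0.5, 0.5], 1       # clean example with true label 1
x_adv = fgsm_perturb(x, y, w, eps=0.3)
print(x_adv)               # each coordinate moved by exactly eps
```

On this toy model the clean input is classified as positive, while the perturbed input crosses the decision boundary, which is the behavior adversarial training is designed to resist.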
Regarding the transferability of defences, our experiments [4] reveal that across different network architectures, a variety of attack methods from white-box to black-box, and various datasets including MNIST and CIFAR10, our algorithms outperform other state-of-the-art methods, e.g., PGD and FGSM. Furthermore, the attacks and robust models derived in our framework are reusable in the sense that the same norm-constrained perturbations can facilitate robust training across different networks. Lastly, we investigate intra-technique and cross-technique transferability and their relation to impact factors ranging from adversarial strength to network capacity. The results suggest that known attacks on the resulting models are less transferable than on models trained with other state-of-the-art attack algorithms.
Our results suggest that exploiting these tree search frameworks can yield significant improvements in the robustness of deep neural networks while saving computational cost during robust training. This paves the way for several future directions, both algorithmic and theoretical, as well as numerous applications to establish the robustness of deep neural networks with increasing trust and safety.
Remote sensing and multispectral imaging of hydrological responses to land use/land cover and climate variability in contrasting agro-ecological systems in a mountainous catchment, Western Cape
Water is a fundamental resource and key in the provision of energy, food and health. However, water resources are currently under severe pressure as a consequence of climate change and variability, population growth and economic development. Two driving factors that affect the availability of water resources are land use/land cover (LULC) change and climate variability. A growing population influences both LULC change and climate variability, inducing changes in key hydrological parameters such as interception rates, evapotranspiration (ET), run-off, surface infiltration, soil moisture, water quality and groundwater availability, thereby affecting watershed hydrology. The effects of LULC change and climate variability on hydrological parameters have been extensively studied.
A study of IPFS as a content distribution protocol in vehicular networks
Over the last few years, vehicular ad-hoc networks (VANETs) have seen
great progress due to the interest in autonomous vehicles and in
distributing content not only between vehicles, but also to the Cloud. Performing
a download/upload to/from a vehicle typically requires the existence
of a cellular connection, but the costs associated with mobile data transfers
in hundreds or thousands of vehicles quickly become prohibitive. A VANET
allows the costs to be several orders of magnitude lower - while keeping the
same large volumes of data - because it is strongly based on the communication
between vehicles (nodes of the network) and the infrastructure.
The InterPlanetary File System (IPFS) is a protocol for storing and distributing
content, where information is addressed by its content, instead of
its location. It was created in 2014 and it seeks to connect all computing
devices with the same system of files, comparable to a BitTorrent swarm
exchanging Git objects. It has been tested and deployed in wired networks,
but never in an environment where nodes have intermittent connectivity,
such as a VANET. This work focuses on understanding IPFS, whether and how
it can be applied to the vehicular network context, and comparing it with other
content distribution protocols.
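The core idea that makes IPFS attractive for intermittently connected networks is content addressing: data is requested by the hash of its content, not by a host location, so any reachable peer holding the block can serve it. The sketch below illustrates the principle with a plain SHA-256 dictionary store; real IPFS uses multihash-encoded CIDs and a distributed hash table, which are not modelled here.

```python
# Illustrative content-addressed block store (stand-in for IPFS CIDs).
import hashlib

store = {}  # peer-local block store: content hash -> bytes

def put(data: bytes) -> str:
    """Store a block and return its content address."""
    addr = hashlib.sha256(data).hexdigest()
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    """Fetch by content address; integrity is verifiable by re-hashing."""
    data = store[addr]
    assert hashlib.sha256(data).hexdigest() == addr
    return data

cid = put(b"sensor reading from vehicle 42")
print(cid, get(cid) == b"sensor reading from vehicle 42")
```

Because the address is derived from the data itself, a vehicle can fetch a block from whichever neighbour currently holds it and verify it locally, which is exactly the property a constantly changing VANET topology benefits from.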
In this dissertation, IPFS has been tested in a small and controlled network
to understand its applicability to VANETs. Issues such as neighbor
discoverability times and poor hashing performance have been addressed.
To compare IPFS with other protocols (such as Veniam’s proprietary solution
or BitTorrent) in a relevant way and in a large scale, an emulation platform
was created. The tests in this emulator were performed at different times of
the day, with a variable number of files and file sizes. Emulated results show
that IPFS is on par with Veniam’s proprietary protocol built specifically for
V2V communication, and greatly outperforms BitTorrent regarding neighbor discoverability
and data transfers.
An analysis of IPFS’ performance in a real scenario was also conducted, using
a subset of STCP’s vehicular network in Oporto, with the support of
Veniam. Results from these tests show that IPFS can be used as a content
dissemination protocol, showing it is up to the challenge provided by a
constantly changing network topology, and achieving throughputs up to 2.8
MB/s, values similar to, and in some cases even better than, Veniam’s proprietary
solution.
False textual information detection: a deep learning approach
Many approaches exist for fact checking in fake news identification, which is the focus of this thesis. Current approaches still perform poorly at scale due to a lack of authoritative sources, insufficient evidence, or, in certain cases, reliance on a single piece of evidence.
To address the lack of evidence and the inability of models to generalise across domains, we propose a style-aware model for detecting false information that improves on existing performance. We found that our model was effective at detecting false information when we evaluated its generalisation ability using news articles and Twitter corpora.
We then propose to improve fact-checking performance by incorporating warrants. We developed a highly efficient prediction model and demonstrated that incorporating warrants is beneficial for fact checking. Due to a lack of external warrant data, we develop a novel model for generating warrants that aid in determining the credibility of a claim. The results indicate that combining a pre-trained language model with a multi-agent model generates high-quality, diverse warrants that improve task performance.
To counter biased opinions and support rational judgments, we propose a model that can generate multiple perspectives on a claim. Experiments confirm that our Perspectives Generation model produces perspectives of higher quality and diversity than any baseline model.
Additionally, we propose to improve the model's detection capability by generating an explainable alternative factual claim that assists the reader in identifying the subtle issues behind factual errors. Our evaluation demonstrates that the generated alternative does indeed increase the veracity of the claim.
Finally, while current research has treated stance detection and fact checking separately, we propose a unified model that integrates both tasks. Classification results demonstrate that our proposed model outperforms state-of-the-art methods.
Budget Travel in the Mediterranean: A Methodology for Reconstructing Ancient Journeys through Least Cost Networks
This is the final version, available on open access from Ubiquity Press via the DOI in this record.
Least cost paths have been used extensively in the archaeological study of ancient routeways. In this paper the principal interest is less in tracing detailed paths than in modelling long-distance travel through an extensive network over land and water. We present a novel, computationally efficient method for avoiding the direction-dependent, positive biases in least cost paths encountered with standard algorithms. A methodology for generating networks of such paths is introduced, based on a trade-off between building and travel costs that minimizes the total cost. We use the Peutinger Table, an illustrated itinerarium of the Roman Empire, to calibrate the parameter controlling network complexity. The problem of how to weight land versus sea travel costs in the network is tackled by comparing itineraries of Delphic theoroi of the third century BCE with solutions of the asymmetric travelling salesman problem, a classic graph theory puzzle.
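The least cost paths underlying such networks are typically found with Dijkstra's algorithm over a weighted graph. The sketch below finds a cheapest route over a tiny mixed land-and-sea graph; the place names, distances, and the 1:0.4 land-to-sea cost ratio are illustrative assumptions, not the paper's calibrated weighting.

```python
# Dijkstra least-cost route over a graph with mixed land and sea edges.
import heapq

def least_cost_path(edges, start, goal):
    """edges: {node: [(neighbour, cost), ...]}; returns (cost, path)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

LAND, SEA = 1.0, 0.4  # assumed cost per unit distance by travel mode
edges = {
    "Athens":  [("Corinth", 80 * LAND), ("Delos", 150 * SEA)],
    "Corinth": [("Delphi", 120 * LAND)],
    "Delos":   [("Delphi", 200 * SEA)],
}
print(least_cost_path(edges, "Athens", "Delphi"))
```

With sea travel weighted cheaper per unit distance, the longer maritime route wins over the direct overland one, which is the kind of trade-off the land-versus-sea calibration in the paper is meant to resolve.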