Quantifying Irregular Geographic Exposure on the Internet
In this work, we examine to what extent the Internet's routing infrastructure needlessly exposes network traffic to nations geographically irrelevant to packet transmission. We quantify which countries are geographically logical to see on a network path traveling between two nations by using convex hulls circumscribing major population centers, and then compare those to the nation states observed in over 14.5 billion measured paths. Our results show that 49% of paths unnecessarily expose traffic to at least one nation. We further explore which nations, regions, and ASes expose and benefit from this geographically illogical traffic. As an example, we see that 23% of source/destination pairs located outside of the United States send their traffic through the US, but only 8% of those paths are geographically logical. Finally, we examine what happens when countries exercise both legal and physical control over ASes transiting traffic, gaining access to traffic outside of their geographic borders but carried by organizations that fall under a particular country's legal jurisdiction. When considering both the physical and legal countries that a path traverses, our results show that over 57% of paths expose traffic to a geographically irrelevant country.
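The hull-based test described in this abstract can be sketched as follows: build a convex hull around the population centers of the two endpoint countries and flag any intermediate hop that falls outside it. This is a minimal illustration, not the paper's measurement pipeline; it assumes coordinates can be treated as planar (x, y) points and uses Andrew's monotone chain algorithm for the hull.

```python
# Sketch: a path is "geographically logical" if every hop lies inside the
# convex hull of the endpoints' major population centers.

def cross(o, a, b):
    """2-D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(hull, p):
    """True if p lies inside or on the convex hull (CCW vertex order)."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))

def geographically_logical(path_hops, endpoint_centers):
    """A path is 'logical' if every hop stays within the endpoints' hull."""
    hull = convex_hull(endpoint_centers)
    return all(inside_hull(hull, hop) for hop in path_hops)

# Toy example with hypothetical planar coordinates for population centers.
centers = [(0, 0), (0, 10), (10, 0), (10, 10)]
print(geographically_logical([(5, 5)], centers))   # hop inside the hull: True
print(geographically_logical([(20, 5)], centers))  # detour outside: False
```

On a real path one would project geolocated router hops before the test; the planar treatment here is only adequate over small regions.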
Data Leak Detection As a Service: Challenges and Solutions
We describe a network-based data-leak detection (DLD) technique, the main feature of which is that the detection does not require the data owner to reveal the content of the sensitive data. Instead, only a small set of specialized digests is needed. Our technique – referred to as the fuzzy fingerprint – can be used to detect accidental data leaks due to human errors or application flaws. The privacy-preserving feature of our algorithms minimizes the exposure of sensitive data and enables the data owner to safely delegate the detection to others. We describe how cloud providers can offer their customers data-leak detection as an add-on service with strong privacy guarantees.
We perform an extensive experimental evaluation of the privacy, efficiency, accuracy, and noise tolerance of our techniques. Our evaluation results under various data-leak scenarios and setups show that our method supports accurate detection with a very small number of false alarms, even when the presentation of the data has been transformed. They also indicate that the detection accuracy does not degrade when partial digests are used. We further provide a quantifiable method to measure the privacy guarantee offered by our fuzzy fingerprint framework.
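The delegation idea in this abstract can be sketched as follows: the owner releases only masked hashes of content shingles, and the provider flags traffic whose shingles collide with those digests. The shingle length, mask width, and SHA-256 hashing below are illustrative assumptions; the paper's actual construction is based on Rabin fingerprints.

```python
# Sketch of privacy-preserving leak detection via fuzzified shingle digests.
import hashlib

SHINGLE = 8       # shingle length in bytes (assumed parameter)
FUZZY_BITS = 4    # low-order bits masked out to blur each fingerprint

def shingles(data: bytes):
    return (data[i:i + SHINGLE] for i in range(len(data) - SHINGLE + 1))

def fuzzy_digest(chunk: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(chunk).digest()[:8], "big")
    return h >> FUZZY_BITS        # many chunks share a digest: plausible denial

def make_fingerprints(sensitive: bytes) -> set:
    """Run by the data owner; only these digests leave the owner's site."""
    return {fuzzy_digest(s) for s in shingles(sensitive)}

def detect(traffic: bytes, fingerprints: set) -> int:
    """Run by the provider: count shingles colliding with owner digests."""
    return sum(1 for s in shingles(traffic) if fuzzy_digest(s) in fingerprints)

secret = b"customer SSN 123-45-6789 must not leave the network"
fps = make_fingerprints(secret)
print(detect(b"mail body ... SSN 123-45-6789 ...", fps) > 0)  # leak flagged
print(detect(b"completely unrelated text payload here", fps))
```

The mask is the privacy lever: widening FUZZY_BITS makes more innocuous chunks collide with each digest, so the provider learns less about the sensitive content, at the cost of more false alarms.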
Information Gathering in Networks via Active Exploration
How should we gather information in a network, where each node's visibility
is limited to its local neighborhood? This problem arises in numerous
real-world applications, such as surveying and task routing in social networks,
team formation in collaborative networks and experimental design with
dependency constraints. Often the informativeness of a set of nodes can be
quantified via a submodular utility function. Existing approaches for
submodular optimization, however, require that the set of all nodes that can be
selected is known ahead of time, which is often unrealistic. In contrast, we
propose a novel model where we start our exploration from an initial node, and
new nodes become visible and available for selection only once one of their
neighbors has been chosen. We then present a general algorithm NetExp for this
problem, and provide theoretical bounds on its performance dependent on
structural properties of the underlying network. We evaluate our methodology on
various simulated problem instances as well as on data collected from a social question-answering system deployed within a large enterprise.
Comment: longer version of IJCAI'15 paper.
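The exploration model in this abstract can be sketched as follows: selection starts from a seed node, and a node becomes selectable only after one of its neighbors has been chosen. The utility below is 1-hop coverage, a standard submodular function, and the greedy rule is an illustration rather than the paper's exact NetExp algorithm.

```python
# Sketch: greedy submodular selection under neighbor-limited visibility.

def coverage(graph, selected):
    """Submodular utility: nodes covered by the selected set and its neighbors."""
    covered = set(selected)
    for v in selected:
        covered.update(graph[v])
    return len(covered)

def explore(graph, seed, budget):
    selected = [seed]
    visible = set(graph[seed])          # only neighbors become selectable
    while len(selected) < budget and visible:
        # Greedy: pick the visible node with the largest marginal gain.
        best = max(visible, key=lambda v: coverage(graph, selected + [v]))
        selected.append(best)
        visible.discard(best)
        visible.update(n for n in graph[best] if n not in selected)
    return selected

# Toy graph (adjacency lists): "c" is invisible until "b" has been chosen.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d", "e"],
     "d": ["c"], "e": ["c"]}
print(explore(g, "a", 3))  # → ['a', 'b', 'c']
```

The contrast with classical submodular maximization is visible in the loop: the candidate set `visible` grows as a side effect of selection, instead of being known up front.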
Net:Geography Fieldwork Frequently Asked Questions
Abstract included in text
Energy-efficient task allocation for distributed applications in Wireless Sensor Networks
We consider the scenario of a sensing, computing, and communicating infrastructure with a programmable middleware that allows different applications to be quickly deployed on top of it so as to follow changing ambient needs. We then face the problem of setting up the desired application across hundreds of nodes, which consists of identifying which actions should be performed by each node so as to satisfy the ambient needs while minimizing the application's impact on the infrastructure's battery lifetime. We approach the problem by considering every possible decomposition of the application's sensing and computing operations into tasks to be assigned to each infrastructure component. The energy consumed in performing each task is then used to compute a cost function, allowing us to evaluate the viability of each deployment solution. Simulation results show that our framework achieves considerable energy savings with respect to sink-oriented or cluster-oriented deployment approaches, particularly for networks with high node density, non-uniform energy consumption and initial energy, and complex actions.
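The deployment search described above can be sketched by enumerating every assignment of tasks to nodes and keeping the one with the lowest cost. The cost table and node names below are hypothetical, and the cost function here simply divides each task's energy by the node's residual battery as a rough stand-in for the paper's lifetime-aware cost.

```python
# Sketch: exhaustive search over task-to-node assignments, minimizing an
# energy cost weighted by residual battery (hypothetical numbers throughout).
from itertools import product

tasks = ["sense", "filter", "aggregate"]
nodes = ["n1", "n2", "n3"]
energy_cost = {                      # hypothetical mJ per task per node
    ("sense", "n1"): 5, ("sense", "n2"): 3, ("sense", "n3"): 6,
    ("filter", "n1"): 2, ("filter", "n2"): 6, ("filter", "n3"): 3,
    ("aggregate", "n1"): 4, ("aggregate", "n2"): 4, ("aggregate", "n3"): 2,
}
battery = {"n1": 100.0, "n2": 40.0, "n3": 80.0}   # residual energy, mJ

def deployment_cost(assignment):
    # Penalize draining nodes that are already low on residual energy.
    return sum(energy_cost[(t, n)] / battery[n]
               for t, n in zip(tasks, assignment))

best = min(product(nodes, repeat=len(tasks)), key=deployment_cost)
print(dict(zip(tasks, best)))
```

Exhaustive enumeration is only viable for small instances (here 3^3 = 27 assignments); at the hundreds-of-nodes scale the abstract targets, a heuristic search over the same cost function would replace the `product` loop.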
Distributions of Human Exposure to Ozone During Commuting Hours in Connecticut using the Cellular Device Network
Epidemiologic studies have established associations between various air
pollutants and adverse health outcomes for adults and children. Due to high
costs of monitoring air pollutant concentrations for subjects enrolled in a
study, statisticians predict exposure concentrations from spatial models that
are developed using concentrations monitored at a few sites. In the absence of
detailed information on when and where subjects move during the study window,
researchers typically assume that the subjects spend their entire day at home,
school or work. This assumption can potentially lead to large exposure
assignment bias. In this study, we aim to determine the distribution of the
exposure assignment bias for an air pollutant (ozone) when subjects are assumed
to be static as compared to accounting for individual mobility. To achieve this
goal, we use cell-phone mobility data on approximately 400,000 users in the state of Connecticut during a week in July 2016, in conjunction with an ozone pollution model, and compare individual ozone exposure under static versus mobile scenarios. Our results show that exposure models that do not take mobility into account often provide poor estimates for individuals commuting into and out of urban areas: the average 8-hour maximum difference between these estimates can exceed 80 parts per billion (ppb). However, for most of the population, the difference in exposure assignment between the two models is small, thereby validating many current epidemiologic studies focusing on exposure to ozone.
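The static-versus-mobile comparison above can be sketched as follows: the static model assigns each subject the ozone level of their home cell for the whole window, while the mobile model time-weights the levels of the cells actually visited. The grid values and the commuter trajectory are made up for illustration.

```python
# Sketch: exposure assignment bias when mobility is ignored.

ozone_ppb = {"suburb": 38.0, "highway": 55.0, "downtown": 72.0}  # hypothetical

def static_exposure(home_cell):
    return ozone_ppb[home_cell]                  # same cell all day

def mobile_exposure(trajectory):
    """trajectory: list of (cell, hours) pairs covering the exposure window."""
    total_hours = sum(h for _, h in trajectory)
    return sum(ozone_ppb[c] * h for c, h in trajectory) / total_hours

# A commuter spending part of an 8-hour window downtown.
commute = [("suburb", 4), ("highway", 1), ("downtown", 3)]
static = static_exposure("suburb")
mobile = mobile_exposure(commute)
print(round(mobile - static, 1))   # assignment bias for this commuter, in ppb
```

For a subject who never leaves the home cell, the two models coincide, which is why the bias concentrates on commuters, as the abstract reports.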
Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems
In recent studies on Complex Systems and Systems-of-Systems theory, considerable effort has been devoted to behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in “SMART” environments, where the huge number of devices, their significant computational capabilities, and their tight interconnection produce a complex architecture for which it is difficult to predict (and control) a desired behavior; furthermore, if the scenario is allowed to dynamically evolve through the modification of both topology and subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to cope with a specific class of control problems in complex systems, the “composability of security functionalities”, recently introduced by European funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied to such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently of the technology and the environment, and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system.
The solution described in this work builds on the proof-of-concept performed in the early phase of the pSHIELD research and enriches it with an innovative, soundly founded metric, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
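The three-step loop described in the abstract (measure each element, aggregate to a system-level score, apply a control action toward the target) can be sketched as below. The weakest-link aggregation, the proportional gain, and the tolerance band are illustrative assumptions, not the SHIELD projects' actual metric.

```python
# Sketch: closed-loop control of a composed security level.

def system_security(levels):
    # Weakest-link aggregation: the system is as secure as its weakest element.
    return min(levels.values())

def control_step(levels, target, gain=0.5, tol=0.01):
    """Raise below-target elements toward the target, proportionally to error."""
    error = target - system_security(levels)
    if error <= tol:
        return levels, True                      # target reached (within tol)
    updated = {e: (lvl + gain * error if lvl < target else lvl)
               for e, lvl in levels.items()}
    return updated, False

levels = {"gateway": 0.9, "sensor": 0.4, "backend": 0.7}   # hypothetical scores
done, steps = False, 0
while not done and steps < 20:
    levels, done = control_step(levels, target=0.8)
    steps += 1
print(done, steps, round(system_security(levels), 3))
```

The loop converges because each step halves the residual error; in the framework the abstract describes, the "control action" would be a concrete reconfiguration (e.g. enabling a stronger cipher on the weak element) rather than a numeric increment.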