Sybil attacks against mobile users: friends and foes to the rescue
Collaborative applications for co-located mobile
users can be severely disrupted by a Sybil attack, to the point of
being unusable. Existing decentralized defences have largely been
designed for peer-to-peer networks rather than for mobile networks.
That is why we propose a new decentralized defence for portable
devices and call it MobID. The idea is that a device manages two
small networks in which it stores information about the devices
it meets: its network of friends contains honest devices, and its
network of foes contains suspicious devices. By reasoning on these
two networks, the device is then able to determine whether
an unknown individual is carrying out a Sybil attack.
We evaluate the extent to which MobID reduces the number
of interactions with Sybil attackers and consequently enables
collaborative applications. We do so using real mobility and social
network data. We also assess the computational and communication
costs of MobID on mobile phones.
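The abstract leaves MobID's admission rule unspecified; the toy sketch below (class name, method names, and the 0.5 overlap threshold are all assumptions made for illustration, not the paper's algorithm) shows how a device might use its friends and foes networks to flag a suspicious newcomer:

```python
# Toy sketch only: MobID's actual admission logic is not given in the
# abstract; names and the overlap threshold here are assumptions.

class MobIDDevice:
    """A device that maintains two small local networks of met devices."""

    def __init__(self, foe_threshold=0.5):
        self.friends = set()          # devices judged honest
        self.foes = set()             # devices judged suspicious
        self.foe_threshold = foe_threshold

    def record_friend(self, device_id):
        self.friends.add(device_id)
        self.foes.discard(device_id)

    def record_foe(self, device_id):
        self.foes.add(device_id)
        self.friends.discard(device_id)

    def looks_sybil(self, claimed_contacts):
        """Flag an unknown device whose claimed contacts overlap mostly
        with our foes network rather than our friends network."""
        contacts = set(claimed_contacts)
        foe_overlap = len(contacts & self.foes)
        friend_overlap = len(contacts & self.friends)
        known = foe_overlap + friend_overlap
        if known == 0:
            return True               # no verifiable ties to either network
        return foe_overlap / known >= self.foe_threshold
```

A newcomer with no ties to either network is treated conservatively as suspect, matching the abstract's premise that reasoning happens purely on locally stored information.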
StakeSource: harnessing the power of crowdsourcing and social networks in stakeholder analysis
Projects often fail because they overlook stakeholders. Unfortunately, existing stakeholder analysis tools only capture stakeholders' information, relying on experts to manually identify them. StakeSource is a web-based tool that automates stakeholder analysis. It "crowdsources" the stakeholders themselves for recommendations about other stakeholders and aggregates their answers using social network analysis.
TRULLO - local trust bootstrapping for ubiquitous devices
Handheld devices have become sufficiently powerful
that it is easy to create, disseminate, and access digital content
(e.g., photos, videos) using them. The volume of such content is
growing rapidly and, from the perspective of each user, selecting
relevant content is key. To this end, each user may run a trust
model - a software agent that keeps track of who disseminates
content that its user finds relevant. This agent does so by
assigning an initial trust value to each producer for a specific
category (context); then, whenever it receives new content, the
agent rates the content and accordingly updates its trust value for
the producer in the content category. However, a problem with
such an approach is that, as the number of content categories
increases, so does the number of trust values to be initially set.
This paper focuses on how to effectively set initial trust values.
The most sophisticated of the current solutions employ predefined
context ontologies, with which initial trust in a given
context is set based on the trust already held in similar contexts.
However, universally accepted (and time invariant) ontologies
are rarely found in practice. For this reason, we propose a
mechanism called TRULLO (TRUst bootstrapping by Latently
Lifting cOntext) that assigns initial trust values based only on
local information (on the ratings of its user’s past experiences)
and that, as such, does not rely on third-party recommendations.
We evaluate the effectiveness of TRULLO by simulating its use
in an informal antique market setting. We also evaluate the
computational cost of a J2ME implementation of TRULLO on
a mobile phone.
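The following sketch illustrates the bootstrapping idea on made-up data. Note that TRULLO itself lifts context latently via Singular Value Decomposition; this simplified stand-in compares categories' rating profiles with plain cosine similarity only to keep the example short:

```python
# Simplified stand-in for TRULLO, on made-up data: initial trust in a
# new category is bootstrapped from categories with similar local rating
# profiles. TRULLO proper uses Singular Value Decomposition; plain
# cosine similarity is substituted here for brevity.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bootstrap_trust(ratings, trust, new_category):
    """ratings: {category: rating profile from the user's past experiences};
    trust: {category: trust value already set}. Returns a similarity-
    weighted average as the initial trust for new_category."""
    num = den = 0.0
    for category, value in trust.items():
        sim = cosine(ratings[category], ratings[new_category])
        num += sim * value
        den += sim
    return num / den if den else 0.5   # neutral prior if nothing is similar

ratings = {
    "paintings": [0.9, 0.8, 0.9],      # hypothetical per-aspect ratings
    "coins":     [0.1, 0.9, 0.1],
    "sculpture": [0.9, 0.9, 0.8],
}
trust = {"paintings": 0.9, "coins": 0.2}
t = bootstrap_trust(ratings, trust, "sculpture")
# sculpture's profile resembles paintings more than coins, so t lands
# between the two existing trust values, above their unweighted mean
```

Crucially, everything here is computed from the user's own past ratings, mirroring TRULLO's design goal of avoiding third-party recommendations and shared ontologies.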
StakeNet: using social networks to analyse the stakeholders of large-scale software projects
Many software projects fail because they overlook stakeholders or involve the wrong representatives of significant groups.
Unfortunately, existing methods in stakeholder analysis are
likely to omit stakeholders and to treat all stakeholders as equally influential. To identify and prioritise stakeholders, we have developed StakeNet, which consists of three main steps: identify stakeholders and ask them to recommend other stakeholders and stakeholder roles; build a social network whose nodes are stakeholders and whose links are recommendations; and prioritise stakeholders using a variety of social network measures. To evaluate StakeNet, we conducted one of the first empirical studies of requirements stakeholders on a software project for a 30,000-user system. Using the data
collected from surveying and interviewing 68 stakeholders,
we show that StakeNet identifies stakeholders and their roles with high recall, and accurately prioritises them. StakeNet uncovers a critical stakeholder role overlooked in the project, whose omission significantly impacted project success.
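The three steps above can be sketched on made-up data. The real StakeNet applies a variety of social-network measures; in this toy version the prioritisation step is reduced to weighted in-degree, i.e. how strongly each person is recommended by others:

```python
# Toy illustration of StakeNet's pipeline with hypothetical stakeholders;
# the prioritisation measure (weighted in-degree) is a simplification.

from collections import defaultdict

def prioritise(recommendations):
    """recommendations: (recommender, recommended, salience) triples.
    Returns stakeholders ranked by total recommendation weight received."""
    score = defaultdict(float)
    for who, whom, salience in recommendations:
        if who != whom:                # ignore self-recommendations
            score[whom] += salience
    return sorted(score, key=score.get, reverse=True)

recs = [
    ("alice", "carol", 5),             # alice recommends carol, salience 5
    ("bob",   "carol", 4),
    ("carol", "dave",  2),
    ("alice", "bob",   3),
]
ranking = prioritise(recs)             # carol: most strongly recommended
```

In the full method, measures such as betweenness or closeness on the recommendation network would capture influence that raw in-degree misses.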
Ocular hypertension in myopia: analysis of contrast sensitivity
Purpose: we evaluated the evolution of contrast sensitivity reduction in patients affected by ocular hypertension and glaucoma, with low to moderate myopia. We also evaluated the relationship between contrast sensitivity and mean deviation of visual field.
Material and methods: 158 patients (316 eyes), aged between 38 and 57 years, were enrolled and divided into 4 groups: emmetropes, myopes, myopes with ocular hypertension (IOP ≥ 21 ± 2 mmHg), and myopes with glaucoma. All patients underwent anamnestic and complete eye evaluation, tonometric curves with Goldmann's applanation tonometer, cup/disc ratio evaluation, gonioscopy by Goldmann's three-mirror lens, automated perimetry (Humphrey 30-2 full-threshold test) and contrast sensitivity evaluation by Pelli-Robson charts. A contrast sensitivity under 1.8 Logarithm of the Minimum Angle of Resolution (LogMAR) was considered abnormal.
Results: contrast sensitivity was reduced in the group of myopes with ocular hypertension (1.788 LogMAR) and in the group of myopes with glaucoma (1.743 LogMAR), while it was preserved in the group of myopes (2.069 LogMAR) and in the group of emmetropes (1.990 LogMAR). We also found a strong correlation between contrast sensitivity reduction and mean deviation of visual fields in myopes with glaucoma (correlation coefficient = 0.86) and in myopes with ocular hypertension (correlation coefficient = 0.78).
Conclusions: the contrast sensitivity assessment performed by the Pelli-Robson test should be performed in all patients with middle-grade myopia, ocular hypertension and optic disc suspected for glaucoma, as it may be useful in the early diagnosis of the disease.
Introduction Contrast can be defined as the ability of the eye to discriminate differences in luminance between the stimulus and the background.
The sensitivity to contrast is represented by the inverse of the minimal contrast necessary to make an object visible: the lower the
contrast required, the greater the sensitivity, and vice versa.
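As a worked illustration of this definition (the numbers are hypothetical):

```python
# Contrast sensitivity is the reciprocal of the lowest contrast a subject
# can still see, usually reported on a log10 scale, as on the Pelli-Robson
# chart used in the study above. Example values are hypothetical.

import math

def contrast_sensitivity(threshold_contrast):
    """threshold_contrast: minimum visible contrast, e.g. 0.016 = 1.6%."""
    return 1.0 / threshold_contrast

def log_contrast_sensitivity(threshold_contrast):
    return math.log10(contrast_sensitivity(threshold_contrast))

# A subject needing at least ~1.6% contrast scores about 1.8 log units,
# the abnormality cut-off used in the study above.
cs = log_contrast_sensitivity(0.016)
```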
Contrast sensitivity is a fundamental aspect of vision, together with visual acuity: the latter defines the smallest spatial detail that the subject manages to discriminate under optimal conditions, but it only provides information about the size of the stimulus that the eye is capable of perceiving; the evaluation of contrast sensitivity, instead, provides information not obtainable from the measurement of visual acuity alone, as it establishes the minimum difference in luminance that must be present between the stimulus and its background for the retina to be adequately stimulated to perceive the stimulus. The clinical methods of examining contrast sensitivity (gratings,
luminance gradients, variable-contrast optotype charts and low-contrast optotype charts) relate the two parameters on which the
ability to distinctly perceive an object depends, namely the difference in luminance between two adjacent areas and the spatial frequency,
which is linked to the size of the object.
The measurement of contrast sensitivity becomes valuable in the diagnosis and follow-up of some important eye conditions such as
glaucoma. Studies show that contrast sensitivity can be related to data obtained with visual perimetry, especially to perimetric
damage of the central area and of the optic nerve head.
The Digital Life of Walkable Streets
Walkability has many health, environmental, and economic benefits. That is
why web and mobile services have been offering ways of computing walkability
scores of individual street segments. Those scores are generally computed from
survey data and manual counting (even of trees). However, that is costly, owing
to the high time, effort, and financial costs. To partly automate the
computation of those scores, we explore the possibility of using the social
media data of Flickr and Foursquare to automatically identify safe and walkable
streets. We find that unsafe streets tend to be photographed during the day,
while walkable streets are tagged with walkability-related keywords. These
results open up practical opportunities (for, e.g., room booking services,
urban route recommenders, and real-estate sites) and have theoretical
implications for researchers who might resort to using social media data to
tackle previously unanswered questions in the area of walkability.
Comment: 10 pages, 7 figures, Proceedings of the International World Wide Web
Conference (WWW 2015)
Online Popularity and Topical Interests through the Lens of Instagram
Online socio-technical systems can be studied as proxy of the real world to
investigate human behavior and social interactions at scale. Here we focus on
Instagram, a media-sharing online platform whose popularity has risen to
hundreds of millions of users. Instagram exhibits a mixture of features
including social structure, social tagging and media sharing. The network of
social interactions among users models various dynamics including
follower/followee relations and users' communication by means of
posts/comments. Users can upload and tag media such as photos and pictures, and
they can "like" and comment on each piece of information on the platform. In this
work we investigate three major aspects on our Instagram dataset: (i) the
structural characteristics of its network of heterogeneous interactions, to
unveil the emergence of self-organization and topically-induced community
structure; (ii) the dynamics of content production and consumption, to
understand how global trends and popular users emerge; (iii) the behavior of
users labeling media with tags, to determine how they devote their attention
and to explore the variety of their topical interests. Our analysis provides
clues to understand human behavior dynamics on socio-technical systems,
specifically users and content popularity, the mechanisms of users'
interactions in online environments and how collective trends emerge from
individuals' topical interests.
Comment: 11 pages, 11 figures, Proceedings of ACM Hypertext 201
City form and well-being: what makes London neighborhoods good places to live?
What is the relationship between urban form and citizens' well-being? In this paper, we propose a quantitative approach to help answer this question, inspired by theories developed within the fields of architecture and population health. The method extracts a rich set of metrics of urban form and well-being from openly accessible datasets. Using linear regression analysis, we identify a model which can explain 30% of the variance in well-being when applied to Greater London, UK. Outcomes of this research can inform the discussion on how to design cities that foster the well-being of their residents.
The architecture of innovation: Tracking face-to-face interactions with UbiComp technologies
The layouts of the buildings we live in shape our everyday lives. In office
environments, building spaces affect employees' communication, which is crucial
for productivity and innovation. However, accurate measurement of how spatial
layouts affect interactions is a major challenge, and traditional techniques may
not give an objective view. We measure the impact of building spaces on social
interactions using wearable sensing devices. We study a single organization
that moved between two different buildings, affording a unique opportunity to
examine how space alone can affect interactions. The analysis is based on two
large scale deployments of wireless sensing technologies: short-range,
lightweight RFID tags capable of detecting face-to-face interactions. We
analyze the traces to study the impact of the building change on social
behavior, which represents a first example of using ubiquitous sensing
technology to study how the physical design of two workplaces combines with
organizational structure to shape contact patterns.
This is the author accepted manuscript. The final version is available at http://dl.acm.org/citation.cfm?id=2632056&CFID=528294814&CFTOKEN=36484024
Trust models for mobile content-sharing applications
Using recent technologies such as Bluetooth, mobile users can share digital content (e.g., photos, videos)
with other users in proximity. However, to reduce the cognitive load on mobile users, it is important that
only appropriate content is stored and presented to them.
This dissertation examines the feasibility of having mobile users filter out irrelevant content by running
trust models. A trust model is a piece of software that keeps track of which devices are trusted (for
sending quality content) and which are not. Unfortunately, existing trust models are not fit for purpose.
Specifically, they lack the ability to: (1) reason about ratings other than binary ratings in a formal way;
(2) rely on the trustworthiness of stored third-party recommendations; (3) aggregate recommendations
to make accurate predictions of whom to trust; and (4) reason across categories without resorting to
ontologies that are shared by all users in the system.
We overcome these shortcomings by designing and evaluating algorithms and protocols with which
portable devices are able automatically to maintain information about the reputability of sources of
content and to learn from each other’s recommendations. More specifically, our contributions are:
1. An algorithm that formally reasons on generic (not necessarily binary) ratings using Bayes’ theorem.
2. A set of security protocols with which devices store ratings in (local) tamper-evident tables and
are able to check the integrity of those tables through a gossiping protocol.
3. An algorithm that arranges recommendations in a “Web of Trust” and that makes predictions of
trustworthiness that are more accurate than existing approaches by using graph-based learning.
4. An algorithm that learns the similarity between any two categories by extracting similarities between
the two categories’ ratings rather than by requiring a universal ontology. It does so automatically
by using Singular Value Decomposition.
We combine these algorithms and protocols and, using real-world mobility and social network data,
we evaluate the effectiveness of our proposal in allowing mobile users to select reputable sources of
content. We further examine the feasibility of implementing our proposal on current mobile phones by
examining the storage and computational overhead it entails. We conclude that our proposal is
feasible to implement and performs better across a range of parameters than a number of current alternatives.
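Contribution 1 can be illustrated with a common Bayesian treatment. The sketch below is not the dissertation's exact model; it shows one standard way a Beta posterior generalises binary good/bad ratings to graded ones:

```python
# Hedged sketch, not the dissertation's exact algorithm: keep a Beta
# distribution over a content source's quality, and let graded ratings
# in [0, 1] update it as fractional successes, generalising the binary
# (good/bad) case via Bayes' theorem.

class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha            # prior pseudo-count of good content
        self.beta = beta              # prior pseudo-count of bad content

    def update(self, rating):
        """rating in [0, 1]: 1.0 = fully relevant content, 0.0 = junk."""
        self.alpha += rating
        self.beta += 1.0 - rating

    @property
    def trust(self):
        """Posterior mean probability that the next item is good."""
        return self.alpha / (self.alpha + self.beta)

agent = BetaTrust()
for r in (1.0, 0.8, 0.9):             # three mostly positive ratings
    agent.update(r)
# agent.trust rises from the 0.5 uniform prior towards the ratings
```

With a uniform prior and ratings 1.0, 0.8, 0.9 the posterior mean is 3.7/5 = 0.74, so a device's confidence in a source moves smoothly with rating quality rather than jumping on each binary outcome.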
