Exploring Communities in Large Profiled Graphs
Given a graph and a query vertex, the community search (CS) problem
aims to efficiently find a subgraph whose vertices are closely related
to the query vertex. Communities are prevalent in social and biological networks, and can be
used in product advertisement and social event recommendation. In this paper,
we study profiled community search (PCS), where CS is performed on a profiled
graph. This is a graph in which each vertex has labels arranged in a
hierarchical manner. Extensive experiments show that PCS can identify
communities with themes that are common to their vertices, and is more
effective than existing CS approaches. As a naive solution for PCS is highly
expensive, we have also developed a tree index, which facilitates efficient,
online solutions for PCS.
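The paper's PCS algorithm and tree index are not detailed in the abstract, but the underlying community-search building block can be illustrated. The sketch below shows a common baseline formulation (the maximal connected k-core containing the query vertex); the function name, the adjacency-dict representation, and the choice of k-core as the community model are assumptions for illustration, not the paper's actual method.

```python
from collections import deque

def community_search(adj, q, k):
    """Return the connected k-core containing query vertex q.

    adj: dict mapping each vertex to a set of its neighbours.
    Returns the community as a set of vertices, or an empty set
    if q does not survive in any k-core.
    """
    # Iteratively peel vertices of degree < k (standard k-core computation).
    core = {v: set(nbrs) for v, nbrs in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(core):
            if len(core[v]) < k:
                for u in core[v]:
                    core[u].discard(v)
                del core[v]
                changed = True
    if q not in core:
        return set()
    # BFS from q restricted to the surviving vertices: the connected component
    # of q inside the k-core is the returned community.
    seen, frontier = {q}, deque([q])
    while frontier:
        v = frontier.popleft()
        for u in core[v]:
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen
```

A profiled variant would additionally require the community's vertices to share labels in the profile hierarchy; the naive approach of checking every candidate subgraph is what makes an index worthwhile.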
MAP: Microblogging Assisted Profiling of TV Shows
Online microblogging services, which people increasingly use to share and
exchange information, have emerged as a promising way to profile multimedia
contents, providing users with a socialized abstraction and understanding of
these contents. In this paper, we propose a microblogging
profiling framework, to provide a social demonstration of TV shows. Challenges
for this study are twofold. First, TV shows are generally offline, i.e.,
most of them are not originally from the Internet, and we need to create a
connection between these TV shows and online microblogging services. Second,
contents in a microblogging service are extremely noisy for video profiling,
and we need to strategically retrieve the most related information for the TV
show profiling. To address these challenges, we propose MAP, a
microblogging-assisted profiling framework, with contributions as follows: i)
We propose a joint user and content retrieval scheme, which uses information
about both actors and topics of a TV show to retrieve related microblogs; ii)
We propose a social-aware profiling strategy, which profiles a video according
to not only its content, but also the social relationship of its microblogging
users and its propagation in the social network; iii) We present an
analysis of real-world TV shows profiled using our framework.
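The joint user-and-content retrieval scheme described in contribution (i) can be sketched as a simple scoring pass over candidate microblogs. Everything below (function name, the weighting of actor hits over topic hits, substring matching) is an illustrative assumption; the paper's actual retrieval scheme is not specified in the abstract.

```python
def retrieve_microblogs(microblogs, actors, topics, min_score=1):
    """Score microblogs against a TV show's actors and topic terms.

    microblogs: list of text strings; actors/topics: lists of terms.
    Actor mentions are weighted higher than topic mentions here, on the
    (assumed) premise that they are less ambiguous signals for a show.
    Returns (score, text) pairs, highest score first.
    """
    results = []
    for text in microblogs:
        lowered = text.lower()
        score = sum(2 for a in actors if a.lower() in lowered)
        score += sum(1 for t in topics if t.lower() in lowered)
        if score >= min_score:
            results.append((score, text))
    return sorted(results, reverse=True)
```

A real system would also use the social-aware signals from contribution (ii), e.g. weighting a microblog by its author's relationship to other retrieved authors and by how widely it propagated.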
Using High-Performance Computing Profilers to Understand the Performance of Graph Algorithms
An algorithm designer working with parallel computing systems should know how the characteristics of their implemented algorithm affect various performance aspects of their parallel program. It would benefit these designers if each algorithm came with a specific set of standards identifying which algorithms work better on a given system. The goal of this paper is therefore to take implementations of four graph algorithms and extract their features, such as memory consumption and scalability, using profilers (VTune/TAU) to determine which algorithms work to their fullest potential on one of three systems: a GPU, a shared-memory system, or a distributed-memory system. The features extracted in this study were scalability, speedup, and parallel efficiency. We find that all four parallel algorithms examined (Community Detection, Communities through Directed Affiliations (CoDA), BigClam, and Breadth-First Search) achieved noticeable speedup with an increasing number of cores.
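The speedup and parallel-efficiency metrics the study extracts have standard definitions: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A minimal sketch, with hypothetical timings (the numbers below are illustrative, not the paper's measurements):

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, cores):
    """Efficiency E(p) = S(p) / p; 1.0 means ideal linear scaling."""
    return speedup(t_serial, t_parallel) / cores

# Hypothetical wall-clock timings (seconds) for one algorithm at 1, 4, 8 cores.
timings = {1: 120.0, 4: 36.0, 8: 22.0}
for p, t in timings.items():
    s = speedup(timings[1], t)
    e = parallel_efficiency(timings[1], t, p)
    print(f"{p} cores: speedup {s:.2f}, efficiency {e:.2f}")
```

Efficiency typically falls below 1.0 as cores increase; comparing how fast it decays across the GPU, shared-memory, and distributed-memory runs is what lets the study rank platforms per algorithm.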
Hydrological Web Services for Operational Flood Risk Monitoring and Forecasting at Local Scale in Niger
Emerging hydrological services provide stakeholders and political authorities with useful and reliable information to support the decision-making process and develop flood risk management strategies. Most of these services adopt the paradigm of open data and standard web services, paving the way to increased interoperability of distributed hydrometeorological services. Moreover, sharing of data, models, and information, and the use of open-source software, greatly contribute to expanding knowledge on flood risk and to increasing flood preparedness. Nevertheless, service interoperability and open data are not common in local systems implemented in developing countries. This paper presents the web platform and related services developed for the Local Flood Early Warning System of the Sirba River in Niger (SLAPIS) to tailor hydroclimatic information to users' needs, both in content and format. Building upon open-source software components and interoperable web services, we created a software framework covering data capture and storage, data flow management procedures from several data providers, real-time web publication, and service-based information dissemination. The geospatial infrastructure and web services respond to the actual and local decision-making context to improve the usability and usefulness of information derived from hydrometeorological forecasts, hydraulic models, and real-time observations. This paper also presents the results of three years of operational campaigns for flood early warning on the Sirba River in Niger. Semiautomatic flood warnings tailored and provided to end users bridge the gap between available technology and local users' needs for adaptation, mitigation, and flood risk management, and make progress toward the sustainable development goals.
Attacker Profiling Through Analysis of Attack Patterns in Geographically Distributed Honeypots
Honeypots are a well-known and widely used technology in the cybersecurity
community, where it is assumed that placing honeypots in different geographical
locations provides better visibility and increases effectiveness. However, how
geolocation affects the usefulness of honeypots is not well-studied, especially
for threat intelligence as early warning systems. This paper examines attack
patterns in a large public dataset of geographically distributed honeypots by
answering methodological questions and creating behavioural profiles of
attackers. Results show that the location of honeypots helps identify attack
patterns and build profiles for the attackers. We conclude that not all the
intelligence collected from geographically distributed honeypots is equally
valuable and that a good early warning system against resourceful attackers may
be built with only two distributed honeypots and a production server.
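Building behavioural profiles from geographically distributed honeypot logs amounts to aggregating per-attacker features across sensors. The sketch below assumes a hypothetical flat event format (`src_ip`, `honeypot_location`, `port`, `timestamp`); the field names and chosen features are illustrative, not the paper's schema.

```python
from collections import defaultdict

def build_profiles(events):
    """Aggregate honeypot events into per-attacker behavioural profiles.

    events: iterable of dicts with (hypothetical) keys
    'src_ip', 'honeypot_location', 'port', 'timestamp'.
    Returns {src_ip: profile dict} where each profile records which
    honeypot locations the attacker touched, which ports it probed,
    its first/last activity, and its total event count.
    """
    profiles = defaultdict(lambda: {
        'locations': set(), 'ports': set(),
        'first_seen': None, 'last_seen': None, 'count': 0})
    for e in events:
        p = profiles[e['src_ip']]
        p['locations'].add(e['honeypot_location'])
        p['ports'].add(e['port'])
        ts = e['timestamp']
        p['first_seen'] = ts if p['first_seen'] is None else min(p['first_seen'], ts)
        p['last_seen'] = ts if p['last_seen'] is None else max(p['last_seen'], ts)
        p['count'] += 1
    return dict(profiles)
```

Comparing how many distinct locations each profile spans is one way to test the paper's question of whether additional honeypot sites add intelligence value.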
National scale modelling to test UK population growth and infrastructure scenarios
This paper describes an exploratory methodology used to study the national scale issues of
population growth and infrastructure implementation across the UK. The project was carried
out for the Government Office for Science in 2015, focussing on two key questions: how could
a 'spatially driven' scenario provoke new thinking on accommodating forecast growth; and
what would be the impact of transport infrastructure investments within this context.
Addressing these questions required the construction of a national scale spatial model that
also needed to integrate datasets on population and employment. Models were analysed
and profiled initially to identify existing relationships between the distribution of population
and employment against the spatial network. Based on these profiles, an experimental
methodology was used first to identify cities with the potential to accommodate growth,
and then to allocate additional population proportionally. This raises important questions
for discussion around which cities provide the benchmark for growth and why, as well as what
the optimal spatial conditions for population growth may be, and how this growth should be
accommodated locally.
Later the model was used to study the impact of High Speed Rail. As these proposed
infrastructure changes improve service (capacity, frequency, journey time), rather than
creating new topological connections, the model was adapted to be able to produce time based
catchments as an output. These catchments could then be expressed in terms of the workforce
population within an hour of every city (a potential travel to work area), as well as the number
of employment opportunities within an hour of every household
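The time-based catchments described above are, in essence, the set of network nodes reachable from an origin within a travel-time budget (here, one hour). A minimal sketch using budget-limited Dijkstra search; the graph representation and function names are assumptions, not the project's actual model code.

```python
import heapq

def catchment(graph, origin, budget_minutes=60.0):
    """Nodes reachable from origin within a travel-time budget.

    graph: dict mapping node -> list of (neighbour, travel_minutes) edges.
    Returns {node: shortest travel time in minutes} for nodes within budget.
    """
    best = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        t, v = heapq.heappop(heap)
        if t > best.get(v, float('inf')):
            continue  # stale queue entry
        for u, w in graph.get(v, ()):
            nt = t + w
            if nt <= budget_minutes and nt < best.get(u, float('inf')):
                best[u] = nt
                heapq.heappush(heap, (nt, u))
    return best
```

Summing workforce population over the nodes in a city's catchment gives the potential travel-to-work area described in the text; improving a rail service shortens edge weights, which can grow the catchment without adding any new topological connections.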
Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems
The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the latter a concept referring to the capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, as well as supported by the unstoppable technology evolution. As edge devices become richer in functionality and smarter, embedding capacities such as storage or processing, as well as new functionalities, such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, as well as the arising open and research challenges, making the case for the real need for their coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, as well as conventional clouds.