Pheromone-based In-Network Processing for wireless sensor network monitoring systems
Monitoring spatio-temporal continuous fields using wireless sensor networks (WSNs) has emerged as a novel solution. An efficient data-driven routing mechanism for sensor querying and information gathering in large-scale WSNs is a challenging problem. In particular, we consider how to query the sensor network information with minimum energy cost in scenarios where only a small subset of sensor nodes has relevant readings. To deal with this problem, we propose a Pheromone-based In-Network Processing (PhINP) mechanism. The proposal takes advantage of both a pheromone-based iterative strategy to direct queries towards nodes with relevant information and query- and response-based in-network filtering to reduce the number of active nodes. Additionally, we apply reinforcement learning to improve performance. The main contribution of this work is a simple and efficient mechanism for information discovery and gathering. It can reduce the messages exchanged in the network, by allowing some error, in order to maximize the network lifetime. We demonstrate by extensive simulations that, using the PhINP mechanism, the query dissemination cost can be reduced by approximately 60% over flooding, with an error below 1%, applying the same in-network filtering strategy.
Fil: Riva, Guillermo Gaston. Universidad Nacional de Córdoba. Facultad de Ciencias Exactas, Físicas y Naturales; Argentina. Universidad Tecnológica Nacional; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba; Argentina
Fil: Finochietto, Jorge Manuel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba. Instituto de Estudios Avanzados en Ingeniería y Tecnología. Universidad Nacional de Córdoba. Facultad de Ciencias Exactas Físicas y Naturales. Instituto de Estudios Avanzados en Ingeniería y Tecnología; Argentina
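The pheromone-based iterative strategy described above can be pictured with a minimal sketch: each candidate node carries a trail value that is reinforced when the node returns a relevant reading and evaporates otherwise, so successive queries concentrate on informative nodes. The function names, the deposit and evaporation constants, and the top-k selection rule below are illustrative assumptions, not details from the paper.

```python
def update_pheromone(pheromone, responded, deposit=1.0, evaporation=0.1):
    """One PhINP-style update (sketch): reinforce the trail of every node that
    returned a relevant reading, then evaporate all trails so stale
    directions fade over successive query rounds."""
    for node in responded:
        pheromone[node] = pheromone.get(node, 0.0) + deposit
    return {node: value * (1.0 - evaporation) for node, value in pheromone.items()}

def select_targets(pheromone, k):
    """Direct the next query toward the k nodes with the strongest trails,
    instead of flooding every node in the network."""
    return sorted(pheromone, key=pheromone.get, reverse=True)[:k]
```

Under this sketch, flooding corresponds to always selecting every node; restricting each round to the strongest trails is what cuts the dissemination cost, at the price of occasionally missing a newly relevant node until its trail recovers.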
'Getting Started': pre-induction access to higher education
Abstract: The transition to higher education poses challenges on many levels. One UK University has piloted a scheme that is designed to prepare prospective students for academic study. ‘Getting Started’ gives prospective students access to the university’s virtual learning environment prior to induction, where they are invited to post queries to a discussion board moderated by a team of support staff and tutors. In 2008 the decision was made to extend the project to include a suite of learning development materials called ‘Snapshot’. This contains bite-sized chunks, or ‘snapshots’, of academic practice including academic thinking, reading and writing. Each chunk of information includes an activity designed to encourage early independent, self-motivated learning. These combined projects tackle the challenges of entrance into higher education for students from both traditional and non-traditional backgrounds and offer a model of good practice designed to convert offers to places and improve retention.
Self-improving Algorithms for Coordinate-wise Maxima
Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have (unknown) independent distributions D_1, D_2, ..., D_n of planar points. An input point set is generated by taking an independent sample from each D_i, so the input distribution D is the product D_1 × D_2 × ... × D_n. A self-improving algorithm repeatedly gets input sets from the distribution D (which is a priori unknown) and tries to optimize its running time for D. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution D. Let OPT_D denote the expected depth of an optimal linear comparison tree computing the maxima for distribution D. Our algorithm eventually has an expected running time of O(OPT_D + n), even though it did not know D to begin with.
Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution D.
Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures)
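For reference, the underlying problem, the coordinate-wise maxima of a planar point set, can be computed distribution-free by the classic O(n log n) sweep below. This is the worst-case baseline, not the paper's self-improving algorithm, which instead learns the input distribution to reach the instance-optimal O(OPT_D + n) bound.

```python
def maxima(points):
    """Return the coordinate-wise maximal points of a planar point set:
    points (x, y) such that no other point has both a larger-or-equal x
    and a larger-or-equal y (with at least one strictly larger).
    Classic sweep: sort by x descending, keep each point whose y exceeds
    the running maximum y seen so far."""
    best_y = float("-inf")
    result = []
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:          # not dominated by any point to its right
            result.append((x, y))
            best_y = y
    return result
```

The sweep works because any point with a smaller x than a previously seen point is maximal exactly when its y beats every y seen so far; the self-improving setting asks how much of this work can be skipped once the product distribution is known.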
Peer to Peer Information Retrieval: An Overview
Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.
Tracking Cyber Adversaries with Adaptive Indicators of Compromise
A forensics investigation after a breach often uncovers network and host
indicators of compromise (IOCs) that can be deployed to sensors to allow early
detection of the adversary in the future. Over time, the adversary will change
tactics, techniques, and procedures (TTPs), which will also change the data
generated. If the IOCs are not kept up-to-date with the adversary's new TTPs,
the adversary will no longer be detected once all of the IOCs become invalid.
Tracking the Known (TTK) is the problem of keeping IOCs, in this case regular
expressions (regexes), up-to-date with a dynamic adversary. Our framework
solves the TTK problem in an automated, cyclic fashion to bracket a previously
discovered adversary. This tracking is accomplished through a data-driven
approach of self-adapting a given model based on its own detection
capabilities.
In our initial experiments, we found that the true positive rate (TPR) of the
adaptive solution degrades much less significantly over time than the naive
solution, suggesting that self-updating the model allows the continued
detection of positives (i.e., adversaries). The cost for this performance is in
the false positive rate (FPR), which increases over time for the adaptive
solution, but remains constant for the naive solution. However, the difference
in overall detection performance, as measured by the area under the curve
(AUC), between the two methods is negligible. This result suggests that
self-updating the model over time should be done in practice to continue to
detect known, evolving adversaries.
Comment: This was presented at the 4th Annual Conf. on Computational Science & Computational Intelligence (CSCI'17) held Dec 14-16, 2017 in Las Vegas, Nevada, US
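The automated, cyclic adaptation described above can be pictured with the sketch below: each cycle keeps the regex IOCs that still fire on fresh adversary data and adds patterns for samples nothing detected. The escaped-literal generalization step is a placeholder assumption for illustration; the paper's framework self-adapts the model from its own detection capabilities rather than by this simple rule.

```python
import re

def adapt(regexes, new_samples):
    """One TTK-style cycle (sketch): prune regex IOCs that no longer match
    any fresh adversary sample, then add a literal pattern for every sample
    that evaded all current IOCs, so the model brackets the adversary's
    evolving TTPs."""
    kept = [r for r in regexes if any(re.search(r, s) for s in new_samples)]
    missed = [s for s in new_samples
              if not any(re.search(r, s) for r in regexes)]
    kept.extend(re.escape(s) for s in missed)   # naive generalization step
    return kept
```

This toy loop exhibits the trade-off the abstract reports: adding patterns for missed samples sustains the true positive rate, while each added pattern is a new opportunity for false positives.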
Teaching practical science online using GIS: a cautionary tale of coping strategies
Strong demand for GIS and burgeoning cohorts have encouraged the delivery of GIS teaching via online distance education models. This contribution reviews a brief foray (2012–2014) into this field by the Open University, deploying open source GIS software to enable students to perform practical science investigations online. The “Remote observation” topic spanned four science disciplines in 6 weeks – an ambitious remit within an innovative overarching module. Documenting the challenges and strategies involved, this paper uses forum usage and student feedback data to derive insights into the student experience and the pitfalls and pleasures of teaching GIS at a distance.