4,129 research outputs found

    Exploring internal child sex trafficking networks using social network analysis

    This article explores the potential of social network analysis (SNA) as a tool to support the investigation of internal child sex trafficking in the UK. In doing so, it uses only data, software, and training already available to UK police. Data from two major operations are analysed using in-built centrality metrics, designed to measure a network’s overarching structural properties and identify particularly powerful individuals. This work addresses victim networks alongside offender networks. The insights generated by SNA inform ideas for targeted interventions based on the principles of Situational Crime Prevention. These harm-reduction initiatives go beyond traditional enforcement to cover prevention, disruption, and prosecution. The article ends by discussing how SNA can be applied and further developed in frontline policing, strategic policing, prosecution, and policy and research.
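    To make the centrality analysis concrete, the following is a minimal sketch using the open-source networkx library and an invented toy contact network; it is not the police data or software referred to above.

```python
# Minimal sketch of centrality analysis on a contact network.
# The networkx library and the toy edge list below are illustrative
# assumptions; they are not the data or software used in the article.
import networkx as nx

# Hypothetical undirected contact network (who was observed in contact with whom).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]
G = nx.Graph(edges)

# Common centrality metrics used to flag structurally important individuals.
degree = nx.degree_centrality(G)            # how many direct contacts a node has
betweenness = nx.betweenness_centrality(G)  # how often a node lies on shortest paths

for node in G.nodes:
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
```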

    On the Experimental Evaluation of Vehicular Networks: Issues, Requirements and Methodology Applied to a Real Use Case

    One of the most challenging fields in vehicular communications has been the experimental assessment of protocols and novel technologies. Researchers usually tend to simulate vehicular scenarios and/or partially validate new contributions in the area by using constrained testbeds and carrying out minor tests. Along these lines, the present work reviews the issues that pioneers in the area of vehicular communications, and in telematics in general, must deal with if they want to perform a good evaluation campaign through real testing. The key needs for a good experimental evaluation are the use of proper software tools for gathering testing data, post-processing the data and generating relevant figures of merit and, finally, presenting the most important results properly. For this reason, a key contribution of this paper is the presentation of an evaluation environment called AnaVANET, which covers these needs. Using this tool and a reference case study, a generic testing methodology is described and applied. In this way, the usage of the IPv6 protocol over a vehicle-to-vehicle routing protocol, with support for IETF-based network mobility, is tested while the main features of the AnaVANET system are presented. This work contributes to laying the foundations for a proper experimental evaluation of vehicular networks and will be useful for many researchers in the area. Comment: in EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, 201
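    The abstract does not detail AnaVANET's internals, but the kind of post-processing it performs can be sketched as follows; the log format, field names, and figures of merit are assumptions chosen for illustration, not AnaVANET's actual input or output.

```python
# Hypothetical post-processing of vehicular test logs into figures of merit.
# The CSV layout (one row per sent packet, with a received flag and delay)
# is an assumed format for illustration only.
import csv
from statistics import mean

def summarise(log_path: str) -> dict:
    sent, received, delays = 0, 0, []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            sent += 1
            if row["received"] == "1":
                received += 1
                delays.append(float(row["delay_ms"]))
    return {
        "packet_delivery_ratio": received / sent if sent else 0.0,
        "mean_delay_ms": mean(delays) if delays else None,
    }

# Example (placeholder file name): print(summarise("v2v_ipv6_run1.csv"))
```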

    Wearable Computing for Health and Fitness: Exploring the Relationship between Data and Human Behaviour

    Health and fitness wearable technology has recently advanced, making it easier for an individual to monitor their behaviours. Previously, self-generated data has interacted with the user to motivate positive behaviour change, but issues arise when relating this to long-term retention of wearable devices. Previous studies within this area are discussed. We also consider a new approach where data is used to support rather than motivate, through monitoring and logging to encourage reflection. Based on the issues highlighted, we then make recommendations on the direction in which future work could be most beneficial.
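    A minimal sketch of the "support through logging and reflection" idea described above, assuming a simple local store of daily step counts; the data model and values are invented for illustration.

```python
# Minimal sketch of logging wearable data for later reflection rather than
# immediate motivation. The daily step-count records are invented examples.
from datetime import date
from statistics import mean

log = {}  # date -> step count, e.g. populated from a wearable's data export

def record(day: date, steps: int) -> None:
    log[day] = steps

def weekly_reflection() -> str:
    """Summarise the logged data to prompt user reflection."""
    if not log:
        return "No data logged yet."
    return (f"Logged {len(log)} days; average {mean(log.values()):.0f} steps/day, "
            f"best day {max(log.values())} steps.")

record(date(2024, 1, 1), 7200)    # illustrative values only
record(date(2024, 1, 2), 10400)
print(weekly_reflection())
```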

    A Brief History of Web Crawlers

    Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The quick expansion of the web and the complexity added to web applications have made the process of crawling very challenging. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face. Different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and automatically capturing the model of a modern web application and extracting data from it is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
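    As a concrete reference point for the crawling process surveyed here, a minimal breadth-first crawler might look like the sketch below; robots.txt handling, politeness delays, and JavaScript execution are deliberately omitted, and the start URL is a placeholder.

```python
# Minimal breadth-first crawler sketch, simplified for illustration:
# no robots.txt handling, politeness delays, or JavaScript execution.
from collections import deque
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 10) -> set:
    seen, queue = set(), deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        # Enqueue every link found on the page, resolved to an absolute URL.
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            queue.append(urljoin(url, a["href"]))
    return seen

# Example (placeholder URL): crawl("https://example.com")
```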

    Proactive cloud management for highly heterogeneous multi-cloud infrastructures

    Various literature studies have demonstrated that the cloud computing paradigm can help to improve the availability and performance of applications subject to software anomalies. Indeed, the cloud resource provisioning model enables users to rapidly access new processing resources, even distributed over different geographical regions, that can be promptly used in the case of, e.g., crashes or hangs of running machines, as well as to balance the load in the case of overloaded machines. Nevertheless, managing a complex geographically-distributed cloud deployment can be a complex and time-consuming task. The Autonomic Cloud Manager (ACM) Framework is an autonomic framework for supporting proactive management of applications deployed over multiple cloud regions. It uses machine learning models to predict failures of virtual machines and to proactively redirect the load to healthy machines/cloud regions. In this paper, we study different policies to perform efficient proactive load balancing across cloud regions in order to mitigate the effect of software anomalies. These policies use predictions about the mean time to failure of virtual machines. We consider the case of heterogeneous cloud regions, i.e., regions with different amounts of resources, and we provide an experimental assessment of these policies in the context of the ACM Framework.
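    The specific policies studied in the paper are not given in the abstract, but one plausible MTTF-weighted policy can be sketched as follows; the region names, MTTF values, and weighting rule are assumptions for illustration, not the ACM Framework's actual implementation.

```python
# Sketch of a proactive load-balancing policy that weights cloud regions by
# their predicted mean time to failure (MTTF). The regions and MTTF values
# are invented; the weighting rule is one plausible policy, not ACM's.
import random

def pick_region(predicted_mttf: dict) -> str:
    """Choose a region with probability proportional to its predicted MTTF."""
    regions = list(predicted_mttf)
    weights = [predicted_mttf[r] for r in regions]
    return random.choices(regions, weights=weights, k=1)[0]

predictions = {"eu-west": 4200.0, "us-east": 1800.0, "ap-south": 300.0}  # seconds
assignments = [pick_region(predictions) for _ in range(1000)]
# Most of the load ends up in the regions predicted to stay healthy longest.
print({r: assignments.count(r) for r in predictions})
```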

    Mobile Computing in Digital Ecosystems: Design Issues and Challenges

    In this paper we argue that the set of wireless, mobile devices (e.g., portable telephones, tablet PCs, GPS navigators, media players) commonly used by human users enables the construction of what we term a digital ecosystem, i.e., an ecosystem constructed out of so-called digital organisms (see below), that can foster the development of novel distributed services. In this context, a human user equipped with his/her own mobile devices can be thought of as a digital organism (DO), a subsystem characterized by a set of peculiar features and resources it can offer to the rest of the ecosystem for use by its peer DOs. The internal organization of the DO must address issues of management of its own resources, including power consumption. Inside the DO and among DOs, peer-to-peer interaction mechanisms can be conveniently deployed to favor resource sharing and data dissemination. Throughout this paper, we show that most of the solutions and technologies needed to construct a digital ecosystem are already available. What is still missing is a framework (i.e., mechanisms, protocols, services) that can effectively support the integration and cooperation of these technologies. In addition, we show that such a framework can be implemented as a middleware subsystem that enables novel and ubiquitous forms of computation and communication. Finally, in order to illustrate the effectiveness of our approach, we introduce some experimental results obtained from preliminary implementations of (parts of) that subsystem. Comment: Proceedings of the 7th International Wireless Communications and Mobile Computing Conference (IWCMC-2011), Emergency Management: Communication and Computing Platforms Workshop
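    To make the notion of a digital organism offering resources to its peer DOs more concrete, here is a minimal sketch of a resource advertisement and lookup; the class, fields, and values are invented, since the paper's middleware is not described at this level of detail in the abstract.

```python
# Minimal sketch of digital organisms (DOs) advertising resources to peers.
# The DigitalOrganism class and its fields are invented for illustration;
# they are not the paper's actual middleware interfaces.
from dataclasses import dataclass, field

@dataclass
class DigitalOrganism:
    owner: str
    resources: dict = field(default_factory=dict)  # resource name -> capacity

    def advertise(self) -> dict:
        """What this DO offers to its peer DOs."""
        return {"owner": self.owner, "resources": self.resources}

def find_providers(peers, resource: str):
    """Return the owners of peer DOs that offer the requested resource."""
    return [p.owner for p in peers if resource in p.resources]

alice = DigitalOrganism("alice", {"gps": 1, "camera": 2})
bob = DigitalOrganism("bob", {"storage_gb": 16})
print(find_providers([alice, bob], "gps"))  # -> ['alice']
```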

    Browser-based Analysis of Web Framework Applications

    Although web applications have evolved into mature solutions providing a sophisticated user experience, they have also become complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as they are aggregated from multiple sources and there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages while treating the server-side process and path complexity as a black box. However, these tests do not detect the cause of an error, which has to be located manually instead. This paper proposes to generate metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on user interface components of web frameworks. Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
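    One way to picture such metadata is to tag each server-generated fragment with the component that produced it, so that a failing browser test can report the responsible component; the attribute name, markup, and helper below are assumptions for illustration rather than the paper's implementation.

```python
# Sketch of backtracking a detected page error to the UI component that
# generated it, via a hypothetical data-component attribute added during
# server-side rendering. The attribute name and HTML are invented examples.
from bs4 import BeautifulSoup

rendered_page = """
<div data-component="UserTable">
  <span class="error">NullPointerException in row renderer</span>
</div>
<div data-component="SearchBox"><input name="q"/></div>
"""

def blame_components(html: str):
    """Map each detected error marker back to its enclosing component."""
    soup = BeautifulSoup(html, "html.parser")
    return [err.find_parent(attrs={"data-component": True})["data-component"]
            for err in soup.select(".error")]

print(blame_components(rendered_page))  # -> ['UserTable']
```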

    The Doctrine of Hell and Missions

    The following texts are eBooks used in the course on Theology of Missions. These topics are vital for the missionary to be solidly grounded in as he seeks to invest his life in the greatest purpose known to mankind: fulfilling the Great Commission of our Savior -- building and establishing the Church through the Gospel among every people group on earth.