11,903 research outputs found

    HeteroCore GPU to exploit TLP-resource diversity

    Predicting LoRaWAN behavior: how machine learning can help

    Large-scale deployments of Internet of Things (IoT) networks are becoming a reality. From a technology perspective, a wealth of information on device parameters, channel states, and network and application data is stored in databases and can be analysed extensively to improve IoT systems in terms of network performance and user services. LoRaWAN (Long Range Wide Area Network) is one of the emerging IoT technologies, with a simple protocol based on LoRa modulation. In this work, we discuss whether and how machine learning approaches can be used to improve network performance. To this aim, we describe a methodology to process LoRaWAN packets and apply a machine learning pipeline to: (i) perform device profiling, and (ii) predict the inter-arrival times of IoT packets. The latter analysis is closely related to channel and network usage and can be leveraged in the future for system performance enhancements. Our analysis mainly focuses on the use of k-means, Long Short-Term Memory neural networks, and Decision Trees. We test these approaches on a real large-scale LoRaWAN network whose overall captured traffic is stored in a proprietary database. Our study shows how profiling techniques enable a machine learning prediction algorithm even when training is not possible because of the high error rates perceived by some devices. In this challenging case, the prediction of the packet inter-arrival time has an error of about 3.5% for 77% of the real sequences.
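
    A minimal sketch of such a two-stage pipeline, run on synthetic data (all feature names and values below are illustrative assumptions, not the paper's dataset, and a Decision Tree regressor stands in for the LSTM to keep the example self-contained):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)

        # (i) Device profiling: cluster devices by simple traffic statistics.
        # Columns (illustrative): mean inter-arrival [s], its std, payload [B].
        device_features = rng.normal([600, 60, 20], [200, 20, 5], size=(100, 3))
        profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(device_features)

        # (ii) Inter-arrival prediction: learn the next gap from a window of
        # previous gaps (a Decision Tree stands in for the paper's LSTM here).
        gaps = 600 + 30 * np.sin(np.arange(500) / 10) + rng.normal(0, 5, 500)
        window = 8
        X = np.lib.stride_tricks.sliding_window_view(gaps[:-1], window)
        y = gaps[window:]
        model = DecisionTreeRegressor(max_depth=6).fit(X[:400], y[:400])
        pred = model.predict(X[400:])
        err = np.mean(np.abs(pred - y[400:]) / y[400:])
        print(f"mean relative error: {100 * err:.2f}%")

    In the paper's setting, the cluster labels from step (i) could then decide which devices' traces are pooled to train a predictor for devices whose own history is too error-prone to train on.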

    Towards an Adaptive Skeleton Framework for Performance Portability

    The proliferation of widely available, but very different, parallel architectures makes the ability to deliver good parallel performance on a range of architectures, or performance portability, highly desirable. Irregularly-parallel problems, where the number and size of tasks is unpredictable, are particularly challenging and require dynamic coordination. The paper outlines a novel approach to delivering portable parallel performance for irregularly parallel programs. The approach combines declarative parallelism with JIT technology, dynamic scheduling, and dynamic transformation. We present the design of an adaptive skeleton library, with a task graph implementation, JIT trace costing, and adaptive transformations. We outline the architecture of the prototype adaptive skeleton execution framework in Pycket, describing tasks, serialisation, and the current scheduler. We report a preliminary evaluation of the prototype framework using 4 micro-benchmarks and a small case study on two NUMA servers (24 and 96 cores) and a small cluster (17 hosts, 272 cores). Key results include Pycket delivering good sequential performance, e.g. almost as fast as C for some benchmarks; good absolute speedups on all architectures (up to 120 on 128 cores for sumEuler); and that the adaptive transformations do improve performance.
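
    As a rough illustration of the coordination pattern (sketched in Python rather than Pycket, with hypothetical names; the real framework adds JIT trace costing and adaptive transformations), a parMap skeleton can schedule dynamically by letting workers pull tasks from a shared queue, so unpredictable task sizes are balanced automatically:

        from concurrent.futures import ThreadPoolExecutor
        import queue
        from math import gcd

        def par_map(f, xs, workers=4):
            # Dynamic scheduling: workers repeatedly pull the next task from
            # a shared queue, so irregular task costs balance out on their own.
            tasks = queue.Queue()
            for i, x in enumerate(xs):
                tasks.put((i, x))
            results = [None] * len(xs)

            def worker():
                while True:
                    try:
                        i, x = tasks.get_nowait()
                    except queue.Empty:
                        return
                    results[i] = f(x)

            with ThreadPoolExecutor(max_workers=workers) as pool:
                for _ in range(workers):
                    pool.submit(worker)
            return results

        # sumEuler-style irregular workload: totient cost grows with n.
        def totient(n):
            return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

        print(sum(par_map(totient, range(1, 2000))))

    CPython's GIL limits the real speedup of this thread-based sketch; it illustrates the scheduling structure, not the performance figures reported above.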

    Coz: Finding Code that Counts with Causal Profiling

    Improving performance is a central concern for software developers. To locate optimization opportunities, developers rely on software profilers. However, these profilers only report where programs spent their time: optimizing that code may have no impact on performance. Past profilers thus both waste developer time and make it difficult for developers to uncover significant optimization opportunities. This paper introduces causal profiling. Unlike past profiling approaches, causal profiling indicates exactly where programmers should focus their optimization efforts, and quantifies their potential impact. Causal profiling works by running performance experiments during program execution. Each experiment calculates the impact of a potential optimization by virtually speeding up code: inserting pauses that slow down all other code running concurrently. The key insight is that this slowdown has the same relative effect as running that line faster, thus "virtually" speeding it up. We present Coz, a causal profiler, which we evaluate on a range of highly-tuned applications: Memcached, SQLite, and the PARSEC benchmark suite. Coz identifies previously unknown optimization opportunities that are both significant and targeted. Guided by Coz, we improve the performance of Memcached by 9%, SQLite by 25%, and accelerate six PARSEC applications by as much as 68%; in most cases, these optimizations involve modifying under 10 lines of code. (Published at SOSP 2015; Best Paper Award.)
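
    The following toy two-thread sketch illustrates only the virtual-speedup insight (stage names and costs are made up; this is not Coz's implementation): to ask "what if work_a were 10 ms faster per call?", each call of work_a instead pauses the other thread by 10 ms, and the total inserted delay is subtracted from the measured runtime:

        import threading, time

        CALLS, D = 20, 0.010          # calls per stage, virtual speedup per call
        delay_owed = 0.0              # pause time owed to thread B
        lock = threading.Lock()

        def thread_a():               # hot stage: 30 ms per call
            global delay_owed
            for _ in range(CALLS):
                time.sleep(0.030)     # work_a (the code we "virtually" speed up)
                with lock:            # instead of running faster, slow the rest
                    delay_owed += D

        def thread_b():               # concurrent stage: 25 ms per call
            global delay_owed
            for _ in range(CALLS):
                time.sleep(0.025)     # work_b
                with lock:
                    owed, delay_owed = delay_owed, 0.0
                time.sleep(owed)      # serve the pauses charged by work_a

        start = time.perf_counter()
        threads = [threading.Thread(target=f) for f in (thread_a, thread_b)]
        for t in threads: t.start()
        for t in threads: t.join()
        measured = time.perf_counter() - start

        # Baseline (no experiment) is ~0.60 s, dominated by thread A. The
        # prediction below lands near 0.50 s, matching what a real 10 ms/call
        # optimization of work_a would achieve (B then becomes the bottleneck).
        print(f"predicted runtime with faster work_a: {measured - CALLS * D:.2f} s")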

    Literacy difficulties in Higher Education: identifying students' needs with a Hybrid Model

    Aims: Studies on literacy difficulties have mainly focused on children or adults with a diagnosis of dyslexia. Some students enter university without such a diagnosis but with literacy difficulties, which may affect their ability to become independent learners and to achieve academically. This exploratory study employs a hybrid model to develop profiles for such individuals. The hybrid model encompasses the causal modelling framework (CMF; Morton & Frith, 1993), the proximal and distal causes of literacy difficulties (Jackson & Coltheart, 2001) and the conceptual framework for the identification of dyslexia (Reid & Came, 2009). Method: In this multiple case study design, three young adults with literacy difficulties were interviewed. Using narrative analysis, we compared the cases' responses with those of a matched control student without literacy difficulties. Findings: The comparison suggests that the proposed hybrid model can be an effective way of highlighting potential obstacles to learning in those with literacy difficulties and would therefore be an invaluable tool for educational psychologists who work in adult educational settings. Limitations: This is an exploratory study based on multiple case studies; a group study with more participants should be conducted to further validate the proposed hybrid model. Conclusions: The current study highlights the importance of understanding the psychosocial, as well as the cognitive and biological, aspects of literacy difficulties, without claiming generalisability.

    Use of KAOS in operational digital forensic investigations

    This paper focuses on the operations involved in the digital forensic process, modelled using the requirements engineering framework KAOS. The idea is to support the claim that a requirements engineering approach to digital forensics produces reusable patterns for future incidents. Our patterns here are operation-focused rather than requirement-focused, which is simpler because the operations can potentially be exhaustively enumerated and evaluated. For example, given the complexity of the Ceglia versus Zuckerberg Facebook case involving alleged document forgery, we show that one of the benefits of the modelling exercise was the resulting set of operations needed. This provides an estimate of the capabilities and resources that will be needed for other complex document-forgery cases involving computers. It may also help to plan investigations and prioritise the use of resources more widely within the case workload of investigators.
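
    As a concrete (and entirely hypothetical) rendering of such an operation-focused pattern, each operation can be recorded with the artefacts it requires and produces, so the operation set for a document-forgery case can be enumerated and reused when planning similar cases:

        from dataclasses import dataclass, field

        @dataclass
        class Operation:
            name: str
            requires: list[str] = field(default_factory=list)  # input artefacts
            produces: list[str] = field(default_factory=list)  # output artefacts

        # Hypothetical operation set for a document-forgery investigation.
        document_forgery_pattern = [
            Operation("acquire_disk_image", produces=["disk_image"]),
            Operation("extract_documents", ["disk_image"], ["documents"]),
            Operation("compare_metadata_timestamps", ["documents"], ["timeline"]),
            Operation("report_findings", ["timeline"], ["report"]),
        ]

        # Planning aid: enumerate the capabilities a similar future case needs.
        for op in document_forgery_pattern:
            print(f"{op.name}: needs {op.requires or 'nothing'}")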