
    MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Mobile smartphones along with embedded sensors have become an effective enabler for various mobile applications, including opportunistic sensing. The rapid advances in smartphone technology are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort of developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability for real-world applications. The results of the evaluation and the lessons learned are presented and discussed in this paper. Comment: Accepted to be published in Transactions on Collaborative Computing, 2014. arXiv admin note: substantial text overlap with arXiv:1310.405
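
    The platform's core idea of decoupling sensor capture from application-specific processing can be illustrated with a small publish/subscribe sketch. The class and method names below are hypothetical, not MOSDEN's actual API; they only show how one captured stream could be reused by several consumer apps.

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List


class SensorHub:
    """Toy publish/subscribe hub: one sensor capture, many consumers.

    Hypothetical sketch of the 'capture once, share across apps' idea;
    it is not MOSDEN's real interface.
    """

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Any], None]]] = defaultdict(list)
        self._latest: Dict[str, Any] = {}  # last reading per sensor, kept for reuse

    def subscribe(self, sensor: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[sensor].append(callback)
        if sensor in self._latest:  # replay the latest value so late joiners can reuse it
            callback(self._latest[sensor])

    def publish(self, sensor: str, reading: Any) -> None:
        self._latest[sensor] = reading
        for cb in self._subscribers[sensor]:  # fan one capture out to every subscribed app
            cb(reading)


# Two "apps" share one accelerometer stream without sampling the sensor twice.
hub = SensorHub()
hub.subscribe("accelerometer", lambda r: print("activity app:", r))
hub.subscribe("accelerometer", lambda r: print("data logger:", r))
hub.publish("accelerometer", {"x": 0.1, "y": 9.8, "z": 0.2})
```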

    Fast Data in the Era of Big Data: Twitter's Real-Time Related Query Suggestion Architecture

    We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time "twist": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of "big data". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a "big data" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle "big" as well as "fast" data.
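
    As a rough illustration of the in-memory approach (not the production engine described in the paper), the sketch below keeps per-session query histories inside a short time window and counts co-occurrences, so related queries can surface within minutes of a burst of activity. The class name, windowing scheme, and parameters are assumptions made for illustration only.

```python
import time
from collections import defaultdict, deque


class RelatedQueryCounter:
    """Toy in-memory co-occurrence counter for related-query suggestion.

    Hypothetical sketch, not Twitter's deployed engine: queries issued by the
    same session within `window_secs` of each other are counted as related.
    """

    def __init__(self, window_secs: int = 600) -> None:
        self.window_secs = window_secs
        self._sessions = defaultdict(deque)                    # session -> deque[(ts, query)]
        self._cooccur = defaultdict(lambda: defaultdict(int))  # query -> {other query: count}

    def observe(self, session_id: str, query: str, ts: float = None) -> None:
        ts = time.time() if ts is None else ts
        history = self._sessions[session_id]
        while history and ts - history[0][0] > self.window_secs:  # evict old queries, bound memory
            history.popleft()
        for _, prev in history:
            if prev != query:
                self._cooccur[prev][query] += 1
                self._cooccur[query][prev] += 1
        history.append((ts, query))

    def suggest(self, query: str, k: int = 5) -> list:
        related = self._cooccur.get(query, {})
        return sorted(related, key=related.get, reverse=True)[:k]


# After many sessions issue "earthquake" followed by "tsunami warning" within the
# window, suggest("earthquake") ranks "tsunami warning" highly almost immediately.
counter = RelatedQueryCounter()
counter.observe("s1", "earthquake")
counter.observe("s1", "tsunami warning")
print(counter.suggest("earthquake"))
```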

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and are presented along with introductory material. Comment: 72 pages

    Integrative Dynamic Reconfiguration in a Parallel Stream Processing Engine

    Load balancing, operator instance collocation and horizontal scaling are critical issues in Parallel Stream Processing Engines for achieving low data processing latency, optimized cluster utilization and minimized communication cost, respectively. In previous work, these issues are typically tackled separately and independently. We argue that these problems are tightly coupled, in the sense that they all need to determine the allocation of workloads and migrate computational states at runtime; optimizing them independently would result in suboptimal solutions. Therefore, in this paper, we investigate how these three issues can be modeled as one integrated optimization problem. In particular, we first consider jobs where workload allocations have little effect on the communication cost, and model the problem of load balance as a Mixed-Integer Linear Program. Afterwards, we present an extended solution called ALBIC, which supports general jobs. We implement the proposed techniques on top of Apache Storm, an open-source Parallel Stream Processing Engine. Extensive experimental results over both synthetic and real datasets show that our techniques clearly outperform existing approaches.
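
    The abstract's idea of casting load balance as a Mixed-Integer Linear Program can be sketched as a small assignment model: binary variables place operator instances on workers while the objective minimizes the most loaded worker. The formulation below is a simplified illustration written with the PuLP solver library; it is not the paper's actual model or the ALBIC extension, and the instance loads are made-up example numbers.

```python
import pulp

# Hypothetical operator-instance loads (e.g., tuples/sec) and available workers.
loads = {"op1": 40, "op2": 25, "op3": 35, "op4": 20}
workers = ["w1", "w2"]

prob = pulp.LpProblem("stream_load_balance", pulp.LpMinimize)

# x[(i, w)] = 1 if operator instance i is assigned to worker w.
x = pulp.LpVariable.dicts("assign", [(i, w) for i in loads for w in workers], cat="Binary")
max_load = pulp.LpVariable("max_load", lowBound=0)

prob += max_load  # objective: minimize the load of the most loaded worker

for i in loads:  # every instance is placed on exactly one worker
    prob += pulp.lpSum(x[(i, w)] for w in workers) == 1
for w in workers:  # each worker's total load is bounded by max_load
    prob += pulp.lpSum(loads[i] * x[(i, w)] for i in loads) <= max_load

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = {i: w for i in loads for w in workers if pulp.value(x[(i, w)]) > 0.5}
print(assignment, "max load:", pulp.value(max_load))
```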

    Synthesis report with pro-poor trade research findings and policy recommendations

    The purpose of the project was to investigate international trade in fisheries products and its relationship to poverty alleviation and the livelihoods of poor aquatic resource users in developing countries in Asia, and to identify options to improve the effectiveness of poverty reduction through international seafood trade. The project directly addressed the EC-PREP priority area of trade and development, and indirectly provided valuable insight into two other priority areas: food security and sustainable rural development; and institutional capacity building. [PDF contains 60 pages.]

    System requirement report for Level 2 – national management institutions, for the Bureau of Fisheries and Aquatic Resources in the Philippines

    This report presents the findings from a thorough literature review, workshops, and group and individual interviews conducted by STREAM in the Philippines in November and December 2003. The ambitious scope of the report, combined with the limited time frame and funding available to compile it, necessitated the extensive use of secondary data, including both published and unpublished material written by staff of the agencies/organisations involved, with very limited editing of the material used. All possible efforts were made to generate information in participation with the government institutions responsible for managing the fisheries, and all contributors (as well as many other stakeholders) were provided with multiple opportunities to comment on the report content. The contributors are listed on the front page of the report. (PDF contains 56 pages)