Persistent Memory Programming Abstractions in Context of Concurrent Applications
The advent of non-volatile memory (NVM) technologies such as PCM, STT-RAM,
memristors, and Fe-RAM is expected to enhance system performance by narrowing
the gap between memory and storage and thereby flattening the traditional
memory hierarchy. These technologies are expected to combine DRAM-like
performance with disk-like persistence. They would therefore also provide
significant performance benefits for big data applications by allowing
in-memory processing of large data sets with minimal latency to persistence.
Leveraging the performance benefits of this memory-centric computing technology
through traditional memory programming is not trivial, and the challenges are
aggravated for parallel/concurrent applications. To this end, several
programming abstractions have been proposed, such as NVthreads, Mnemosyne, and
Intel's NVML. However, which abstraction is easiest to program with while still
ensuring consistency and balancing the various software and architectural
trade-offs remains an open question and an active area of research in the NVM
community.
We study the NVthreads, Mnemosyne, and NVML libraries by building a concurrent
and persistent set and an open-addressed hash-table data structure. In the
process, we explore and report the various trade-offs and hidden costs involved
in building concurrent applications for persistence with these NVM programming
abstractions, in terms of efficiency, consistency, and ease of programming.
Finally, we evaluate the performance of the set and hash-table applications. We
observe that NVML is the easiest to program with but the least efficient,
whereas Mnemosyne is the most performance-friendly but requires significant
programming effort to build concurrent and persistent applications.
Comment: Accepted in HiPC SRS 201
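As an illustration of the kind of programming effort these abstractions entail, the sketch below shows a crash-consistent insert into a persistent linked list using libpmemobj, the transactional object store shipped with Intel's NVML (now PMDK). It is a minimal sketch, not the paper's set or hash-table implementation: the struct names, pool path, and layout string are placeholders.

```c
/* A minimal sketch (not the paper's implementation) of a crash-consistent
 * insert into a persistent linked list with libpmemobj (NVML/PMDK).
 * The pool path, layout name, and struct names are illustrative. */
#include <stddef.h>
#include <libpmemobj.h>

struct list_node {
    PMEMoid next;   /* persistent pointer to the next node */
    int     value;
};

struct list_root {
    PMEMoid head;   /* persistent pointer to the first node */
};

/* Insert at the head inside a transaction so a crash leaves the list intact.
 * A concurrent version would additionally need locking (e.g. a PMEMmutex). */
static void
list_insert(PMEMobjpool *pop, int value)
{
    PMEMoid root_oid = pmemobj_root(pop, sizeof(struct list_root));
    struct list_root *root = pmemobj_direct(root_oid);

    TX_BEGIN(pop) {
        /* Log the field we are about to modify so it can be rolled back. */
        pmemobj_tx_add_range(root_oid, offsetof(struct list_root, head),
                             sizeof(PMEMoid));

        PMEMoid node_oid = pmemobj_tx_zalloc(sizeof(struct list_node), 0);
        struct list_node *node = pmemobj_direct(node_oid);
        node->value = value;
        node->next  = root->head;
        root->head  = node_oid;
    } TX_END
}

int
main(void)
{
    /* "/pmem/list_pool" is a placeholder path on a DAX-mounted filesystem. */
    PMEMobjpool *pop = pmemobj_create("/pmem/list_pool", "list_layout",
                                      PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL)
        return 1;
    list_insert(pop, 42);
    pmemobj_close(pop);
    return 0;
}
```

Even this single insert requires explicit undo-logging and transactional allocation, and a concurrent version would add pool-resident locks on top, which gives a flavor of the effort-versus-efficiency trade-off the abstract reports across NVML, Mnemosyne, and NVthreads.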
A Peer-to-Peer Middleware Framework for Resilient Persistent Programming
The persistent programming systems of the 1980s offered a programming model
that integrated computation and long-term storage. In these systems, reliable
applications could be engineered without requiring the programmer to write
translation code to manage the transfer of data to and from non-volatile
storage. More importantly, it simplified the programmer's conceptual model of
an application, and avoided the many coherency problems that result from
multiple cached copies of the same information. Although technically
innovative, persistent languages were not widely adopted, perhaps due in part
to their closed-world model. Each persistent store was located on a single
host, and there were no flexible mechanisms for communication or transfer of
data between separate stores. Here we re-open the work on persistence and
combine it with modern peer-to-peer techniques in order to provide support for
orthogonal persistence in resilient and potentially long-running distributed
applications. Our vision is of an infrastructure within which an application
can be developed and distributed with minimal modification, whereupon the
application becomes resilient to certain failure modes. If a node, or the
connection to it, fails during execution of the application, the objects are
re-instantiated from distributed replicas, without their reference holders
being aware of the failure. Furthermore, we believe that this can be achieved
within a spectrum of application programmer intervention, ranging from minimal
to totally prescriptive, as desired. The same mechanisms encompass an
orthogonally persistent programming model. We outline our approach to
implementing this vision, and describe current progress.
Comment: Submitted to EuroSys 200
Comparison of Empirical Data from Two Honeynets and a Distributed Honeypot Network
In this paper we present empirical results and speculative analysis based on observations collected over a two-month period from studies with two high-interaction honeynets, deployed in a corporate and an SME (small to medium enterprise) environment, and a distributed honeypot deployment. All three networks contain a mixture of Windows and Linux hosts. We detail the architecture of the deployments and compare the observations from the three environments. We analyze in detail the times between attacks across different hosts, operating systems, networks, and geographical locations. Even though results from honeynet deployments are often reported in the literature, this paper provides novel results from analyzing traffic from three different types of networks, together with some initial exploratory models. This research aims to contribute to endeavours in the wider security research community to build methods, grounded in strong empirical work, for assessing the robustness of computer-based systems in hostile environments.
MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications
Smartphones and their embedded sensors have become an effective enabler for a
variety of mobile applications, including opportunistic sensing. Technological
advances in smartphones are opening up a world of possibilities. This
paper proposes a mobile collaborative platform called MOSDEN that enables and
supports opportunistic sensing at run time. MOSDEN captures and shares sensor
data across multiple apps, smartphones and users. MOSDEN supports the emerging
trend of separating sensors from application-specific processing, storing and
sharing. MOSDEN promotes the reuse and re-purposing of sensor data, thereby
reducing the effort of developing novel opportunistic sensing applications. MOSDEN has
been implemented on Android-based smartphones and tablets. Experimental
evaluations validate the scalability and energy efficiency of MOSDEN and its
suitability for real-world applications. The evaluation results and lessons
learned are presented and discussed in this paper.
Comment: Accepted to be published in Transactions on Collaborative Computing, 2014. arXiv admin note: substantial text overlap with arXiv:1310.405
Harvey: A Greybox Fuzzer for Smart Contracts
We present Harvey, an industrial greybox fuzzer for smart contracts, which
are programs managing accounts on a blockchain. Greybox fuzzing is a
lightweight test-generation approach that effectively detects bugs and security
vulnerabilities. However, greybox fuzzers randomly mutate program inputs to
exercise new paths; this makes it challenging to cover code that is guarded by
narrow checks, which are satisfied by no more than a few input values.
Moreover, most real-world smart contracts transition through many different
states during their lifetime, e.g., for every bid in an auction. To explore
these states and thereby detect deep vulnerabilities, a greybox fuzzer would
need to generate sequences of contract transactions, e.g., by creating bids
from multiple users, while at the same time keeping the search space and test
suite tractable. In this experience paper, we explain how Harvey alleviates
both challenges with two key fuzzing techniques and distill the main lessons
learned. First, Harvey extends standard greybox fuzzing with a method for
predicting new inputs that are more likely to cover new paths or reveal
vulnerabilities in smart contracts. Second, it fuzzes transaction sequences in
a targeted and demand-driven way. We have evaluated our approach on 27
real-world contracts. Our experiments show that the underlying techniques
significantly increase Harvey's effectiveness in achieving high coverage and
detecting vulnerabilities, in most cases orders-of-magnitude faster; they also
reveal new insights about contract code.
Comment: arXiv admin note: substantial text overlap with arXiv:1807.0787
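For background, the sketch below shows the generic coverage-guided (greybox) fuzzing loop that tools like Harvey extend; it does not show Harvey's input prediction or demand-driven transaction-sequence fuzzing, and all identifiers, constants, and the toy target are illustrative assumptions.

```c
/* A minimal sketch of a generic coverage-guided fuzzing loop (AFL-style),
 * shown only as background; this is not Harvey's algorithm. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAP_SIZE 65536          /* size of the edge-coverage bitmap */
#define MAX_INPUT 64

static uint8_t global_map[MAP_SIZE];   /* edges seen across all runs */

/* Hypothetical stand-in for the instrumented program under test. */
static void run_target(const uint8_t *data, size_t len, uint8_t *coverage)
{
    coverage[0] = 1;                    /* entry edge */
    if (len > 0 && data[0] == 0x42)     /* a narrow check guarding more code */
        coverage[1] = 1;
}

/* Trivial mutation operator: flip one random bit of the input. */
static void mutate(uint8_t *data, size_t len)
{
    if (len > 0)
        data[rand() % len] ^= (uint8_t)(1u << (rand() % 8));
}

/* Record new edges; return nonzero if this run covered anything new. */
static int has_new_coverage(const uint8_t *coverage)
{
    int fresh = 0;
    for (size_t i = 0; i < MAP_SIZE; i++) {
        if (coverage[i] && !global_map[i]) {
            global_map[i] = 1;
            fresh = 1;
        }
    }
    return fresh;
}

int main(void)
{
    uint8_t seed[MAX_INPUT] = {0};
    uint8_t input[MAX_INPUT];
    uint8_t coverage[MAP_SIZE];

    for (int i = 0; i < 100000; i++) {
        memcpy(input, seed, sizeof(input));
        mutate(input, sizeof(input));

        memset(coverage, 0, sizeof(coverage));
        run_target(input, sizeof(input), coverage);

        /* Keep mutants that exercise new edges as the basis for further runs. */
        if (has_new_coverage(coverage))
            memcpy(seed, input, sizeof(seed));
    }
    return 0;
}
```

Random bit flips like the one above rarely satisfy narrow checks such as the `data[0] == 0x42` guard in the toy target, which is precisely the limitation that Harvey's input-prediction technique is described as addressing.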