
    Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis

    We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances and martial arts in addition to locomotion. The acRNN accomplishes this by explicitly accounting for autoregressive noise accumulation during training. To our knowledge, ours is the first work to demonstrate the generation of over 18,000 continuous frames (300 seconds) of new complex human motion across different styles.
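
    The auto-conditioning idea can be illustrated with a short sketch. The PyTorch-style code below is an assumption about how such a training regime might look (the model size, pose dimension, and ground-truth/conditioned segment lengths are illustrative, not the authors' settings): during training, the network is periodically fed its own predictions instead of ground-truth frames, so it learns to recover from its accumulated error.

        # Minimal sketch of auto-conditioned training (illustrative, not the authors' code).
        import torch
        import torch.nn as nn

        class MotionRNN(nn.Module):
            def __init__(self, pose_dim=63, hidden=512):
                super().__init__()
                self.cell = nn.LSTMCell(pose_dim, hidden)
                self.out = nn.Linear(hidden, pose_dim)

            def forward(self, seq, gt_len=5, cond_len=5):
                # seq: (T, B, pose_dim) ground-truth motion clip
                T, B, _ = seq.shape
                h = seq.new_zeros(B, self.cell.hidden_size)
                c = seq.new_zeros(B, self.cell.hidden_size)
                preds, prev = [], seq[0]
                for t in range(T - 1):
                    # Alternate gt_len ground-truth steps with cond_len steps in
                    # which the network's own output is fed back (auto-conditioning).
                    use_gt = (t % (gt_len + cond_len)) < gt_len
                    h, c = self.cell(seq[t] if use_gt else prev, (h, c))
                    prev = self.out(h)
                    preds.append(prev)
                return torch.stack(preds)  # train with an L2 loss against seq[1:]

    At test time the same network would run fully autoregressively, feeding each predicted frame back as the next input.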

    Espresso: Brewing Java For More Non-Volatility with Non-volatile Memory

    Fast, byte-addressable non-volatile memory (NVM) offers both near-DRAM latency and disk-like persistence, which has generated considerable interest in revolutionizing the system software stack and programming models. However, it is less well understood how NVM can be combined with a managed runtime such as the Java Virtual Machine (JVM) to ease persistence management. This paper proposes Espresso, a holistic extension to Java and its runtime that enables Java programmers to exploit NVM for persistence management with high performance. Espresso first provides a general persistent heap design called the Persistent Java Heap (PJH) to manage persistent data as normal Java objects. The heap is then strengthened with a recovery mechanism that provides crash consistency for heap metadata. Espresso further introduces a new abstraction called Persistent Java Object (PJO), an easy-to-use yet safe persistent programming model for persisting application data. The evaluation confirms that Espresso significantly outperforms state-of-the-art NVM support for Java (i.e., JPA and PCJ) while remaining compatible with existing data structures in Java programs.

    Enabling Strong Database Integrity using Trusted Execution Environments

    Many applications require the immutable and consistent sharing of data across organizational boundaries. Because conventional datastores cannot provide this functionality, blockchains have been proposed as one possible solution. Yet public blockchains are energy inefficient, hard to scale, and suffer from limited throughput and high latency, while permissioned blockchains depend on specially designated nodes, potentially leak meta-information, and also suffer from scale and performance bottlenecks. This paper presents CreDB, a datastore that provides blockchain-like integrity guarantees using trusted execution environments. CreDB employs four novel mechanisms to support a new class of applications. First, it creates a permanent record of every transaction, known as a witness, that clients can use not only to audit the database but also to prove to third parties that desired actions took place. Second, it associates with every object an inseparable and inviolable policy, which not only performs access control but also enables the datastore to implement state machines whose behavior is amenable to analysis. Third, timeline inspection allows authorized parties to inspect and reason about the history of changes made to the data. Finally, CreDB provides a protected function evaluation mechanism that allows integrity-protected computation over private data. The paper describes these mechanisms, and the applications they collectively enable, in detail. We have fully implemented a prototype of CreDB on Intel SGX. Evaluation shows that CreDB can serve as a drop-in replacement for other NoSQL stores, such as MongoDB, while providing stronger integrity guarantees.
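
    To make the witness mechanism concrete, here is a minimal sketch of the general idea of a hash-chained transaction witness. It is an assumption about how such proofs can be constructed, not CreDB's actual implementation; the class and field names are hypothetical.

        # Hedged sketch of a hash-chained transaction witness (not CreDB's API).
        import hashlib
        import json

        class WitnessChain:
            def __init__(self):
                self.head = "0" * 64                     # genesis link

            def record(self, tx: dict) -> dict:
                body = json.dumps(tx, sort_keys=True)
                self.head = hashlib.sha256((self.head + body).encode()).hexdigest()
                # A real enclave-backed design would also sign this link so that
                # clients can show it to third parties as proof of the transaction.
                return {"tx": tx, "chain_head": self.head}

        chain = WitnessChain()
        witness = chain.record({"op": "put", "key": "account/alice", "value": 100})
        print(witness["chain_head"])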

    Performance and Programming Effort Trade-offs of Android Persistence Frameworks

    A fundamental building block of a mobile application is the ability to persist program data between different invocations. Referred to as persistence, this functionality is commonly implemented by means of persistence frameworks. Without a clear understanding of the energy consumption, execution time, and programming effort of popular Android persistence frameworks, mobile developers lack guidelines for selecting frameworks for their applications. To bridge this knowledge gap, we report the results of a systematic study of the performance and programming effort trade-offs of eight Android persistence frameworks, and provide practical recommendations for mobile application developers. Comment: Preprint version of a Journal of Systems and Software submission.

    Optimizing Batch Linear Queries under Exact and Approximate Differential Privacy

    Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for an adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results while satisfying the privacy guarantees. Previous work, notably [LHR+10], has suggested that with an appropriate strategy, processing a batch of correlated queries as a whole achieves considerably higher accuracy than answering them individually. However, to our knowledge there is currently no practical solution for finding such a strategy for an arbitrary query batch; existing methods either return strategies of poor quality (often worse than naive methods) or require prohibitively expensive computations even for moderately large domains. Motivated by this, we propose the low-rank mechanism (LRM), the first practical differentially private technique for answering batch linear queries with high accuracy. LRM works for both exact (i.e., ε-) and approximate (i.e., (ε, δ)-) differential privacy definitions. We derive the utility guarantees of LRM and provide guidance on how to set the privacy parameters given the user's utility expectation. Extensive experiments using real data demonstrate that our proposed method consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins. Comment: ACM Transactions on Database Systems (ACM TODS). arXiv admin note: text overlap with arXiv:1212.230
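
    The strategy-based batch answering that the abstract refers to can be sketched in a few lines of NumPy. The sketch below is a simplified illustration under ε-differential privacy, assuming a histogram data vector and a caller-supplied strategy matrix; it does not perform the paper's low-rank optimization.

        # Hedged sketch of strategy-based batch query answering under epsilon-DP.
        import numpy as np

        def answer_batch(W, x, eps, L):
            """W: m x n workload of linear queries, x: length-n histogram,
            L: r x n strategy matrix chosen so that W can be written as B @ L."""
            B = np.linalg.lstsq(L.T, W.T, rcond=None)[0].T   # solve B @ L ~= W
            sens = np.abs(L).sum(axis=0).max()               # L1 sensitivity of L @ x
            noisy = L @ x + np.random.laplace(scale=sens / eps, size=L.shape[0])
            return B @ noisy                                 # noisy answers to W @ x

        # Example: all prefix-sum queries answered via the identity strategy.
        n = 8
        W = np.tril(np.ones((n, n)))
        x = np.random.randint(0, 10, size=n).astype(float)
        print(answer_batch(W, x, eps=1.0, L=np.eye(n)))

    Choosing a better strategy matrix L (which is what the paper's optimization targets) is what reduces the error of the reconstructed workload answers.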

    An Algorithm for Mining High Utility Closed Itemsets and Generators

    Traditional association rule mining based on the support-confidence framework provides objective measures of the rules that are of interest to users. However, it does not reflect the utility of the rules. To extract non-redundant association rules in the support-confidence framework, frequent closed itemsets and their generators play an important role. Likewise, to extract non-redundant association rules among high utility itemsets, high utility closed itemsets (HUCIs) and their generators should be extracted before the traditional support-confidence framework can be applied. However, no efficient method currently exists for mining HUCIs together with their generators. This paper addresses this issue. A post-processing algorithm, called HUCI-Miner, is proposed to mine HUCIs along with their generators. The proposed algorithm is implemented and evaluated using both synthetic and real datasets.
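
    As a rough illustration of the post-processing step, the sketch below applies the standard closedness test to an already-mined collection of high utility itemsets: an itemset is kept only if no proper superset in the collection has the same support. This is a simplified illustration under that assumption, not the HUCI-Miner algorithm itself, and it ignores generator computation.

        # Hedged sketch: filter high utility itemsets down to the closed ones.
        def closed_high_utility_itemsets(huis):
            """huis: dict mapping frozenset(itemset) -> (utility, support)."""
            closed = {}
            for items, (util, supp) in huis.items():
                has_equal_superset = any(
                    items < other and huis[other][1] == supp for other in huis
                )
                if not has_equal_superset:
                    closed[items] = (util, supp)
            return closed

        huis = {
            frozenset("a"):  (30, 4),
            frozenset("ab"): (55, 4),   # same support as {a}, so {a} is not closed
            frozenset("bc"): (40, 2),
        }
        print(closed_high_utility_itemsets(huis))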

    Cache Serializability: Reducing Inconsistency in Edge Transactions

    Read-only caches are widely used in cloud infrastructures to reduce access latency and load on backend databases. Operators view coherent caches as impractical at genuinely large scale, and many client-facing caches are updated asynchronously by best-effort pipelines. Existing solutions that support cache consistency are inapplicable to this scenario since they require a round trip to the database on every cache transaction. Existing incoherent cache technologies are oblivious to transactional data access, even when the backend database supports transactions. We propose T-Cache, a novel caching policy for read-only transactions in which inconsistency is tolerable (it will not cause safety violations) but undesirable (it has a cost). T-Cache improves cache consistency despite asynchronous and unreliable communication between the cache and the database. We define cache-serializability, a variant of serializability that is suitable for incoherent caches, and prove that with unbounded resources T-Cache implements this new specification. With limited resources, T-Cache allows the system manager to choose a trade-off between performance and consistency. Our evaluation shows that T-Cache detects many inconsistencies with only nominal overhead. We use synthetic workloads to demonstrate the efficacy of T-Cache when data accesses are clustered, as well as its adaptive reaction to workload changes. With workloads based on real-world topologies, T-Cache detects 43-70% of the inconsistencies and increases the rate of consistent transactions by 33-58%. Comment: Ittay Eyal, Ken Birman, and Robbert van Renesse, "Cache Serializability: Reducing Inconsistency in Edge Transactions," 35th IEEE International Conference on Distributed Computing Systems (ICDCS), June 29 - July 2, 2015.
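
    One simple way to picture an inconsistency check for read-only transactions over an incoherent cache is sketched below. It is an illustration under assumed version-interval tags on cached values, not a description of T-Cache's actual mechanism.

        # Hedged sketch: detect whether a read-only transaction's reads could all
        # have come from a single database snapshot.
        from dataclasses import dataclass

        @dataclass
        class CachedValue:
            value: object
            valid_from: int        # database version when this value was written
            valid_until: float     # version when it was overwritten (inf if current)

        def reads_consistent(reads):
            """reads: list of CachedValue objects read by one transaction."""
            lo = max(r.valid_from for r in reads)
            hi = min(r.valid_until for r in reads)
            return lo < hi         # some version satisfies every read simultaneously

        tx = [CachedValue("alice", 10, 15), CachedValue("bob", 12, float("inf"))]
        print(reads_consistent(tx))    # True: versions 12-14 form a consistent snapshot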

    Coordination-Free Byzantine Replication with Minimal Communication Costs

    State-of-the-art fault-tolerant and federated data management systems rely on fully-replicated designs in which all participants have equivalent roles. Consequently, these systems have only limited scalability and are ill-suited for high-performance data management. As an alternative, we propose a hierarchical design in which a Byzantine cluster manages data, while an arbitrary number of learners can reliably learn of these updates and use the corresponding data. To realize our design, we propose the delayed-replication algorithm, an efficient solution to the Byzantine learner problem that is central to our design. The delayed-replication algorithm is coordination-free and scalable, and it has minimal communication cost for all participants involved. In doing so, the delayed-replication algorithm opens the door to new high-performance fault-tolerant and federated data management systems. To illustrate this, we show that the delayed-replication algorithm is not only useful for supporting specialized learners, but can also be used to reduce the overall communication cost of permissioned blockchains and to improve their storage scalability.
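
    A minimal sketch of the Byzantine learner idea is given below. It assumes a digest-then-payload scheme in which a learner accepts an update once f + 1 cluster replicas report the same digest for the same sequence number, so at most one replica ever has to ship the full payload; this is an illustration of that idea, not the paper's delayed-replication algorithm.

        # Hedged sketch of a coordination-free Byzantine learner (illustrative only).
        import hashlib
        from collections import defaultdict

        class Learner:
            def __init__(self, f):
                self.f = f                              # assumed fault threshold
                self.votes = defaultdict(set)           # (seqno, digest) -> replica ids
                self.learned = {}                       # seqno -> accepted digest

            def on_digest(self, replica_id, seqno, digest):
                self.votes[(seqno, digest)].add(replica_id)
                if len(self.votes[(seqno, digest)]) >= self.f + 1:
                    self.learned[seqno] = digest        # at least one sender is honest

            def on_payload(self, seqno, payload: bytes):
                # The bulk data can come from any single replica; the digest check
                # makes a forged payload detectable.
                return self.learned.get(seqno) == hashlib.sha256(payload).hexdigest()

        learner = Learner(f=1)
        update = b"ledger entry 42"
        digest = hashlib.sha256(update).hexdigest()
        for replica in ("r1", "r2"):                    # f + 1 = 2 matching digests
            learner.on_digest(replica, seqno=42, digest=digest)
        print(learner.on_payload(42, update))           # True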

    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft biometric information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the use of more general information about a system user, information which is not necessarily unique to that user. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics, such as subject age, and, most recently, 'higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, and provides an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.

    A Comparative Taxonomy and Survey of Public Cloud Infrastructure Vendors

    An increasing number of technology enterprises are adopting cloud-native architectures to offer their web-based products, moving away from privately owned data centers and relying exclusively on cloud service providers. As a result, cloud vendors have lately multiplied, along with the estimated annual revenue they share. However, in the process of selecting a provider's cloud service over the competition, we observe a lack of common ground in terms of terminology, service functionality, and billing models. This is an important gap, especially in the current state of the industry, where each cloud provider has moved towards its own service taxonomy while the number of specialized services has grown exponentially. This work discusses the cloud services offered by four cloud vendors that are dominant in terms of current market share. We provide a taxonomy of their services and sub-services that designates the major service families, namely computing, storage, databases, analytics, data pipelines, machine learning, and networking. The aim of this clustering is to indicate similarities, common design approaches, and functional differences among the offered services. The outcomes are essential both for individual researchers and for larger enterprises in their attempt to identify the set of cloud services that will fully meet their needs without compromise. While we acknowledge that this is a dynamic industry, where new services arise constantly and old ones receive important updates, this study paints a solid picture of the current offerings and highlights the directions that cloud service providers are following.