Speculative Concurrency Control for Real-Time Databases
In this paper, we propose a new class of concurrency control algorithms especially suited for real-time database applications. Our approach relies on the use of (potentially) redundant computations to ensure that serializable schedules are found and executed as early as possible, thus increasing the chances of timely commitment of transactions with strict timing constraints. Due to this nature, we term our algorithms Speculative, and refer to them collectively as Speculative Concurrency Control (SCC) algorithms. SCC algorithms combine the advantages of both Pessimistic and Optimistic Concurrency Control (PCC and OCC) algorithms while avoiding their disadvantages. On the one hand, SCC resembles PCC in that conflicts are detected as early as possible, making alternative schedules available in a timely fashion in case they are needed. On the other hand, SCC resembles OCC in that it allows conflicting transactions to proceed concurrently, thus avoiding unnecessary delays that may jeopardize their timely commitment.
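The combination described above can be sketched as a toy: detect a read-write conflict early (PCC-style), let the reader keep running (OCC-style), and fork a speculative "shadow" that stands ready if optimistic validation later fails. This is only an illustrative abstraction under assumed names (`Transaction`, `validate`), not the paper's actual algorithms.

```python
# Toy sketch of the Speculative Concurrency Control (SCC) idea.
# Assumption-laden illustration: class and method names are invented here.

class Transaction:
    def __init__(self, name):
        self.name = name
        self.read_set = set()
        self.write_set = set()
        self.shadows = []  # speculative alternative schedules, forked on conflict

    def read(self, item, active):
        self.read_set.add(item)
        # Early (pessimistic-style) conflict detection against active writers.
        for other in active:
            if other is not self and item in other.write_set:
                # Fork a shadow that would re-read `item` after `other` commits,
                # so a ready-to-run alternative exists if validation fails.
                self.shadows.append(f"{self.name}-shadow-after-{other.name}")

    def write(self, item):
        self.write_set.add(item)

    def validate(self, committed_writes):
        # OCC-style backward validation: fail if we read an item that a
        # committed transaction overwrote.
        return self.read_set.isdisjoint(committed_writes)


active = []
t1 = Transaction("T1"); active.append(t1)
t2 = Transaction("T2"); active.append(t2)

t1.write("x")           # T1 will update x
t2.read("x", active)    # conflict detected early -> shadow forked for T2
t2.read("y", active)    # no conflict on y

committed = set()
committed |= t1.write_set   # T1 commits first, invalidating T2's optimistic run
if not t2.validate(committed):
    # Instead of aborting and restarting from scratch, adopt the shadow
    # that was prepared when the conflict was first detected.
    outcome = t2.shadows[0]
else:
    outcome = t2.name
print(outcome)   # T2-shadow-after-T1
```

The point of the sketch is the timing: the alternative schedule exists *before* validation fails, which is how SCC trades redundant computation for a better chance of meeting a deadline.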
Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study
This paper presents Rudra, a parameter-server-based distributed computing framework tuned for training large-scale deep neural networks. Using variants of the asynchronous stochastic gradient descent algorithm, we study the impact of synchronization protocol, stale gradient updates, minibatch size, learning rates, and number of learners on runtime performance and model accuracy. We introduce a new learning rate modulation strategy to counter the effect of stale gradients and propose a new synchronization protocol that can effectively bound the staleness in gradients, improve runtime performance and achieve good model accuracy. Our empirical investigation reveals a principled approach for distributed training of neural networks: the mini-batch size per learner should be reduced as more learners are added to the system to preserve the model accuracy. We validate this approach using commonly-used image classification benchmarks: CIFAR10 and ImageNet. (Accepted by the IEEE International Conference on Data Mining 2016 (ICDM 2016).)
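One plausible staleness-dependent modulation, sketched below, divides the step size by the gradient's staleness (the gap between the parameter version the gradient was computed at and the version it is applied to). This is a hedged illustration of the general idea; the exact rule and names (`modulated_lr`, `apply_update`) are assumptions, not the paper's implementation.

```python
# Sketch: staleness-aware learning-rate modulation for asynchronous SGD.
# A gradient computed at parameter version v and applied at version t has
# staleness s = t - v; damping the step by 1/s limits the harm of stale updates.

def modulated_lr(base_lr, current_version, gradient_version):
    staleness = max(1, current_version - gradient_version)
    return base_lr / staleness

def apply_update(params, grad, base_lr, current_version, gradient_version):
    lr = modulated_lr(base_lr, current_version, gradient_version)
    return [p - lr * g for p, g in zip(params, grad)]

params = [1.0, -2.0]
# Fresh gradient (staleness 1): full step of 0.1.
params = apply_update(params, [0.5, 0.5], 0.1, 10, 9)
# Stale gradient (staleness 4): step damped to 0.025.
params = apply_update(params, [0.5, 0.5], 0.1, 14, 10)
print(params)
```

The same spirit applies to the paper's minibatch finding: as more learners contribute updates, each learner's per-update contribution should shrink to keep the effective aggregate step well-behaved.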
Web service interfaces for inter-organisational business processes: an infrastructure for automated reconciliation
For the majority of front-end e-business systems, the assumption of a coherent and homogeneous set of interfaces is highly unrealistic. Problems start in the back-end, with systems characterised by a heterogeneous mix of applications and business processes. Integration can be complex and expensive, as systems evolve more in accordance with business needs than with technical architectures. E-business systems thus face the challenge of presenting a coherent image of a diversified reality. Web services make business interfaces more efficient, but effectiveness is a business requirement of at least comparable importance. We propose a technique for automatic reconciliation of the Web service interfaces involved in inter-organisational business processes. The working assumption is that the Web service front-end of each company is represented by a set of WSDL and WSCL interfaces. The result of our reconciliation method is a common interface that all the parties can effectively enforce. Indications are also given on ways to adapt individual interfaces to the common one. The technique is embodied in a prototype that we also present.
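A minimal flavour of reconciliation can be sketched by abstracting each party's WSDL front-end to a map from operation names to input fields and intersecting them. The `reconcile` function and the buyer/seller interfaces below are hypothetical illustrations; the paper's method also handles conversation ordering via WSCL, which this toy ignores.

```python
# Minimal sketch of interface reconciliation (assumed abstraction:
# interface = {operation_name: [input_field, ...]}).

def reconcile(interfaces):
    """Return a common interface: operations every party offers,
    restricted to the input fields all parties share."""
    common_ops = set.intersection(*(set(i) for i in interfaces))
    common = {}
    for op in common_ops:
        sigs = [frozenset(i[op]) for i in interfaces]
        shared = frozenset.intersection(*sigs)
        if shared:                       # keep only operations with a shared core
            common[op] = sorted(shared)
    return common

buyer  = {"placeOrder": ["item", "qty", "giftWrap"], "cancel": ["orderId"]}
seller = {"placeOrder": ["item", "qty"], "ship": ["orderId"]}
print(reconcile([buyer, seller]))   # {'placeOrder': ['item', 'qty']}
```

Fields dropped from the common interface (here `giftWrap`) are exactly where per-party adapters are needed, matching the paper's note on adapting individual interfaces to the common one.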
Improving the network transmission cost of differentiated web services
This paper investigates the transmission cost of Web-service-related messages, which is affected by network latency. Web services enable seamless interaction and integration of e-business applications. A Web service exposes a collection of operations that interact with the outside world over the Internet through XML messaging. Though XML effectively describes message-related information and is fairly human readable, it adversely affects the performance of Web services in terms of transmission cost, processing cost, and so on. This paper aims to minimize the network latency of Web service message communication by employing pre-emptive resume scheduling. The fundamental principle of this approach is the provision of preferential treatment to some messages as compared to others: it assigns different priorities to distinct classes of messages, given that some messages may tolerate longer delays than others. For instance, shorter messages may be given higher priority than longer messages, or the Web service provider may give higher priority to the messages of paying subscribers.
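The defining property of pre-emptive resume scheduling is that an interrupted transmission later continues from where it stopped rather than restarting. The unit-per-tick simulation below is a hedged sketch of that discipline (the `schedule` function and message classes are invented for illustration).

```python
# Toy discrete simulation of pre-emptive resume scheduling.
# A message is (arrival, priority, size, name); a lower priority number is
# more urgent. One size unit is transmitted per tick; a newly arrived
# higher-priority message pre-empts the current one, whose remaining size
# is preserved so it resumes with no work lost.

def schedule(messages):
    """Return [(name, completion_time), ...] in completion order."""
    remaining = {name: size for _, _, size, name in messages}
    finished = []
    t = 0
    while remaining:
        ready = [(prio, arr, name) for arr, prio, _, name in messages
                 if arr <= t and name in remaining]
        if not ready:
            t += 1
            continue
        _, _, name = min(ready)      # highest-priority ready message runs
        remaining[name] -= 1         # transmit one unit; others keep progress
        if remaining[name] == 0:
            del remaining[name]
            finished.append((name, t + 1))
        t += 1
    return finished

# A long low-priority message is pre-empted at t=2 by a short urgent one,
# then resumes at t=4 with its 2 already-sent units still counted.
msgs = [(0, 2, 5, "bulk"), (2, 1, 2, "urgent")]
print(schedule(msgs))   # [('urgent', 4), ('bulk', 7)]
```

Under a restart (rather than resume) policy, "bulk" would have had to retransmit its first two units, finishing at t=9 instead of t=7, which is the latency saving the resume discipline buys.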
On the Feasibility of Fine-Grained TLS Security Configurations in Web Browsers Based on the Requested Domain Name
Most modern web browsers today sacrifice optimal TLS security for backward compatibility. They apply coarse-grained TLS configurations that support (by default) legacy versions of the protocol with known design weaknesses, as well as weak ciphersuites that provide fewer security guarantees (e.g. no Forward Secrecy), and silently fall back to them if the server selects them. This introduces various risks, including downgrade attacks such as the POODLE attack [15], which exploits the browser's silent fallback mechanism to downgrade the protocol version in order to exploit the legacy version's flaws. To achieve a better balance between security and backward compatibility, we propose a mechanism for fine-grained TLS configurations in web browsers based on the sensitivity of the domain name in the HTTPS request, using a whitelisting technique. That is, the browser enforces optimal TLS configurations for connections going to sensitive domains while enforcing default configurations for the rest of the connections. We demonstrate the feasibility of our proposal by implementing a proof-of-concept as a Firefox browser extension. We envision this mechanism as a built-in security feature in web browsers, e.g. a button similar to the "Bookmark" button in Firefox browsers, and as a standardised HTTP header, to augment browsers' security.
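The per-domain policy can be sketched with Python's `ssl` module: look up the requested domain in a whitelist and pin a stricter protocol floor for sensitive sites. The domain list and the choice of TLS 1.3 as the "optimal" floor are assumptions for illustration; the paper's actual prototype is a Firefox extension, not this code.

```python
import ssl

# Sketch of whitelist-based, fine-grained TLS configuration.
SENSITIVE_DOMAINS = {"bank.example", "mail.example"}   # hypothetical whitelist

def context_for(domain):
    ctx = ssl.create_default_context()
    if domain in SENSITIVE_DOMAINS:
        # Optimal configuration: no legacy fallback for sensitive domains.
        ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    else:
        # Default, backward-compatible configuration for everything else.
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

strict = context_for("bank.example")    # pinned to TLS 1.3
relaxed = context_for("blog.example")   # allows TLS 1.2 for compatibility
```

A downgrade attempt against `bank.example` would then fail the handshake outright instead of silently succeeding over a legacy protocol version.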
Hybrid Session Verification through Endpoint API Generation
© Springer-Verlag Berlin Heidelberg 2016. This paper proposes a new hybrid session verification methodology for applying session types directly to mainstream languages, based on generating protocol-specific endpoint APIs from multiparty session types. The API generation promotes static type checking of the behavioural aspect of the source protocol by mapping the state space of an endpoint in the protocol to a family of channel types in the target language. This is supplemented by very light run-time checks in the generated API that enforce a linear usage discipline on instances of the channel types. The resulting hybrid verification guarantees the absence of protocol violation errors during the execution of the session. We implement our methodology for Java as an extension to the Scribble framework, and use it to specify and implement compliant clients and servers for real-world protocols such as HTTP and SMTP.
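The state-space-to-channel-types mapping plus run-time linearity check can be sketched as follows. Each protocol state becomes its own channel class whose methods return the next state's channel, so ordering is enforced by construction, while a flag detects reuse of a consumed channel. This is a hedged Python analogue under invented names (`S1`, `S2`, `LinearChannel`); the actual Scribble extension generates Java APIs.

```python
# Sketch: one channel class per protocol state, with a run-time linearity check.

class LinearChannel:
    def __init__(self):
        self._used = False

    def _consume(self):
        # Light run-time check enforcing use-at-most-once (linear) discipline.
        if self._used:
            raise RuntimeError("linearity violation: channel instance reused")
        self._used = True

# Toy two-step protocol: client sends a request, receives a response, then ends.
class S1(LinearChannel):
    def send_request(self, payload):
        self._consume()
        return S2()          # advancing the protocol yields the next state

class S2(LinearChannel):
    def recv_response(self):
        self._consume()
        return "response", EndState()

class EndState:
    pass                     # terminal state: no further operations exist

s1 = S1()
s2 = s1.send_request("GET /")    # S1 -> S2
msg, end = s2.recv_response()    # S2 -> End

try:
    s1.send_request("GET /again")    # reusing the consumed S1 channel
except RuntimeError as e:
    print("caught:", e)
```

Out-of-order use (e.g. calling `recv_response` before `send_request`) is simply impossible to type: no `S1` instance has that method, which is the "static" half of the hybrid guarantee.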
Multiplex flow-through immunoassay formats for screening of mycotoxins in a variety of food matrices
A toolbox of nanobodies developed and validated for use as intrabodies and nanoscale immunolabels in mammalian brain neurons.
Nanobodies (nAbs) are small, minimal antibodies that have distinct attributes that make them uniquely suited for certain biomedical research, diagnostic and therapeutic applications. Prominent uses include as intracellular antibodies or intrabodies to bind and deliver cargo to specific proteins and/or subcellular sites within cells, and as nanoscale immunolabels for enhanced tissue penetration and improved spatial imaging resolution. Here, we report the generation and validation of nAbs against a set of proteins prominently expressed at specific subcellular sites in mammalian brain neurons. We describe a novel hierarchical validation pipeline to systematically evaluate nAbs isolated by phage display for effective and specific use as intrabodies and immunolabels in mammalian cells including brain neurons. These nAbs form part of a robust toolbox for targeting proteins with distinct and highly spatially-restricted subcellular localization in mammalian brain neurons, allowing for visualization and/or modulation of structure and function at those sites