Investigation into Mobile Learning Framework in Cloud Computing Platform
Abstract—Cloud computing infrastructure is increasingly used for distributed applications. Mobile learning applications deployed in the cloud are a new research direction, and such applications require specific development approaches for effective and reliable communication. This paper proposes an interdisciplinary approach to the design and development of mobile applications in the cloud. The approach consists of a front service toolkit and a backend service toolkit. The front service toolkit packages data and sends it to a backend deployed in a cloud computing platform; the backend service toolkit manages rules and workflow and then transmits the required results back to the front service toolkit. To demonstrate the feasibility of the approach, the paper introduces a case study and reports its performance.
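The abstract only outlines the front/backend split, so the following is a minimal sketch, under assumptions, of how a front service toolkit might package a request and post it to a cloud backend. The endpoint URL, payload fields, and the helpers `package_request` and `send_to_backend` are illustrative names, not the paper's actual API.

```python
import json
import urllib.request

# Hypothetical cloud backend endpoint (assumption, not taken from the paper).
BACKEND_URL = "https://example-cloud-host/mlearning/api"

def package_request(learner_id, action, payload):
    """Front service toolkit side: package data for the cloud backend."""
    return json.dumps({
        "learner_id": learner_id,
        "action": action,        # e.g. "fetch_lesson", "submit_quiz"
        "payload": payload,
    }).encode("utf-8")

def send_to_backend(body):
    """Send the packaged request; the backend applies its rules/workflow
    and returns only the results the mobile client needs."""
    req = urllib.request.Request(
        BACKEND_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    body = package_request("learner-42", "fetch_lesson", {"course": "CS101"})
    # result = send_to_backend(body)   # requires a live backend to run
```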
ObliviSync: Practical Oblivious File Backup and Synchronization
Oblivious RAM (ORAM) protocols are powerful techniques that hide a client's
data as well as access patterns from untrusted service providers. We present an
oblivious cloud storage system, ObliviSync, that specifically targets one of
the most widely-used personal cloud storage paradigms: synchronization and
backup services, popular examples of which are Dropbox, iCloud Drive, and
Google Drive. This setting provides a unique opportunity because the above
privacy properties can be achieved with a simpler form of ORAM called
write-only ORAM, which allows for dramatically increased efficiency compared to
related work. Our solution is asymptotically optimal and practically efficient,
with a small constant overhead of approximately 4x compared with non-private
file storage, depending only on the total data size and parameters chosen
according to the usage rate, and not on the number or size of individual files.
Our construction also offers protection against timing-channel attacks, which
has not been previously considered in ORAM protocols. We built and evaluated a
full implementation of ObliviSync that supports multiple simultaneous read-only
clients and a single concurrent read/write client whose edits automatically and
seamlessly propagate to the readers. We show that our system functions under
high workloads, with realistic file size distributions, and with small
additional latency (compared to a baseline encrypted file system) when
paired with Dropbox as the synchronization service.
Comment: 15 pages. Accepted to NDSS 201
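For intuition, here is a minimal sketch of the write-only ORAM idea the abstract relies on, not ObliviSync's actual construction: each sync epoch the client re-encrypts and uploads a fixed number of randomly chosen blocks, substituting dummy writes when no real edits are pending, so the server observes a data-independent (and timing-independent) write pattern. The sketch uses stand-in "encryption" and glosses over free-block tracking and the encrypted position map a real system needs.

```python
import os
import random

BLOCK_SIZE = 4096      # bytes per block (illustrative)
NUM_BLOCKS = 1024      # total blocks stored at the server (illustrative)
WRITES_PER_EPOCH = 3   # fixed number of uploads per sync epoch

def encrypt(plaintext: bytes) -> bytes:
    """Stand-in for authenticated encryption; fresh randomness makes
    re-encrypted blocks indistinguishable to the server. NOT real crypto."""
    return os.urandom(16) + plaintext

class WriteOnlyORAM:
    def __init__(self):
        self.server = [encrypt(b"\0" * BLOCK_SIZE) for _ in range(NUM_BLOCKS)]
        self.pending = []    # (logical_id, data) waiting to be flushed
        self.position = {}   # logical block id -> physical block index

    def write(self, logical_id, data):
        """Buffer a logical write; it is flushed to a random block later."""
        self.pending.append((logical_id, data))

    def flush_epoch(self):
        """Upload exactly WRITES_PER_EPOCH blocks, real or dummy, so the
        write pattern and its timing are independent of user activity.
        (A real construction also manages free blocks and collisions.)"""
        targets = random.sample(range(NUM_BLOCKS), WRITES_PER_EPOCH)
        for idx in targets:
            if self.pending:
                logical_id, data = self.pending.pop(0)
                self.position[logical_id] = idx
                self.server[idx] = encrypt(data.ljust(BLOCK_SIZE, b"\0"))
            else:
                # Dummy write: re-encrypt filler so the server learns nothing.
                self.server[idx] = encrypt(b"\0" * BLOCK_SIZE)

oram = WriteOnlyORAM()
oram.write("report.txt/0", b"new contents")
oram.flush_epoch()   # server sees 3 uploads whether or not anything changed
```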
NPLDA: A Deep Neural PLDA Model for Speaker Verification
The state-of-the-art approach for speaker verification consists of a neural
network based embedding extractor along with a backend generative model such as
the Probabilistic Linear Discriminant Analysis (PLDA). In this work, we propose
a neural network approach for backend modeling in speaker recognition. The
likelihood ratio score of the generative PLDA model is posed as a
discriminative similarity function and the learnable parameters of the score
function are optimized using a verification cost. The proposed model, termed
neural PLDA (NPLDA), is initialized using the generative PLDA model parameters.
The loss function for the NPLDA model is an approximation of the minimum
detection cost function (DCF). The speaker recognition experiments using the
NPLDA model are performed on the speaker verification task in the VOiCES
datasets as well as the SITW challenge dataset. In these experiments, the NPLDA
model optimized using the proposed loss function improves significantly over
the state-of-the-art PLDA-based speaker verification system.
Comment: Published in Odyssey 2020, the Speaker and Language Recognition
Workshop (VOiCES Special Session). Link to GitHub Implementation:
https://github.com/iiscleap/NeuralPlda. arXiv admin note: substantial text
overlap with arXiv:2001.0703
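As a rough illustration of the two ingredients the abstract names, the following numpy sketch shows a pairwise PLDA-style quadratic scoring function with matrices that would be learnable (and initialized from a generative PLDA model), together with a sigmoid-smoothed approximation of the detection cost. The exact parameterization, embedding preprocessing, and optimization in NPLDA differ; the constants `p_target` and `alpha` here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nplda_score(xe, xt, P, Q):
    """Quadratic similarity in the spirit of the PLDA log-likelihood ratio:
    xe, xt are enrollment/test embeddings; P, Q are (learnable) matrices."""
    return xe @ P @ xt + xe @ Q @ xe + xt @ Q @ xt

def soft_detection_cost(scores, labels, threshold,
                        p_target=0.05, c_miss=1.0, c_fa=1.0, alpha=15.0):
    """Differentiable surrogate for the detection cost function (DCF):
    hard miss / false-alarm counts are replaced by sigmoids around the
    decision threshold; alpha controls how sharp the approximation is."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)   # 1 = target, 0 = non-target
    p_miss = np.sum(labels * sigmoid(alpha * (threshold - scores))) / np.sum(labels)
    p_fa = np.sum((1 - labels) * sigmoid(alpha * (scores - threshold))) / np.sum(1 - labels)
    return c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa

# Toy usage with random 4-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
d = 4
P, Q = rng.standard_normal((d, d)), rng.standard_normal((d, d))
scores = [nplda_score(rng.standard_normal(d), rng.standard_normal(d), P, Q)
          for _ in range(10)]
labels = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
print(soft_detection_cost(scores, labels, threshold=0.0))
```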
