SAIA: Split Artificial Intelligence Architecture for Mobile Healthcare System
With the advancement of deep learning (DL), Internet of Things (IoT), and cloud
computing techniques for biomedical and healthcare problems, mobile healthcare
systems have received unprecedented attention. Since DL techniques usually
require an enormous amount of computation, most of them cannot be directly
deployed on resource-constrained mobile and IoT devices. Hence, most mobile
healthcare systems leverage cloud computing infrastructure, where the data
collected by mobile and IoT devices are transmitted to cloud computing
platforms for analysis. However, in contested environments, relying on the
cloud might not be practical at all times; for instance, satellite
communication might be denied or disrupted. We propose SAIA, a Split
Artificial Intelligence Architecture for mobile healthcare systems. Unlike
traditional approaches to artificial intelligence (AI), which solely exploit
the computational power of the cloud server, SAIA not only relies on the
cloud computing infrastructure while wireless communication is available,
but also utilizes lightweight AI solutions that work locally on the client
side; hence, it can work even when communication is impeded. In SAIA, we
propose a meta-information-based decision unit that decides, under different
conditions, whether a sample captured by the client should be processed by the
embedded AI (i.e., kept on the client) or the networked AI (i.e., sent to the
server). In our experimental evaluation, extensive experiments have been
conducted on two popular healthcare datasets. Our results show that SAIA
consistently outperforms its baselines in terms of both effectiveness and
efficiency.
Comment: 17 pages, 9 figures, 2 tables
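The decision unit can be sketched as a simple routing rule. The following is a hedged illustration assuming a confidence-threshold policy with made-up parameter names; the paper's actual meta-information-based decision unit may differ:

```python
# Hypothetical sketch of a SAIA-style decision unit: route a sample to
# the embedded (on-device) model or the networked (cloud) model. The
# confidence threshold and argument names are illustrative assumptions.

def decide_route(embedded_confidence: float,
                 link_available: bool,
                 threshold: float = 0.9) -> str:
    """Return 'embedded' to keep the sample on the client,
    or 'networked' to send it to the server."""
    if not link_available:
        return "embedded"          # communication impeded: stay local
    if embedded_confidence >= threshold:
        return "embedded"          # local model is confident enough
    return "networked"             # defer hard samples to the cloud

print(decide_route(0.95, True))    # embedded
print(decide_route(0.60, True))    # networked
print(decide_route(0.60, False))   # embedded
```

The point of such a rule is graceful degradation: when the link drops, every sample is handled locally rather than lost.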
PASTA: A Parallel Sparse Tensor Algorithm Benchmark Suite
Tensor methods have gained increasing attention from various applications,
including machine learning, quantum chemistry, healthcare analytics, social
network analysis, data mining, and signal processing, to name a few. Sparse
tensors and their algorithms have become critical to further improving the
performance of these methods and enhancing the interpretability of their
output. This work presents a sparse tensor algorithm benchmark suite (PASTA)
for single- and multi-core CPUs. To the best of our knowledge, this is the
first benchmark suite for the sparse tensor world. PASTA aims to: 1) help
application users evaluate different computer systems using its representative
computational workloads; 2) provide insights for better utilizing existing
computer architectures and systems, and inspiration for future designs. This
benchmark suite is publicly released at https://gitlab.com/tensorworld/pasta
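For readers unfamiliar with sparse tensor workloads, here is a minimal sketch of the kind of kernel such a benchmark exercises: a third-order tensor in COO (coordinate) format contracted with a vector along one mode. The values and shapes are illustrative, not taken from PASTA:

```python
# A COO sparse tensor stores only its nonzeros as (index tuple, value)
# pairs. Coordinates and values below are made up for illustration.

coords = [(0, 0, 1), (1, 2, 0), (2, 1, 1)]   # (i, j, k) of nonzeros
vals   = [1.0, 2.0, 3.0]
shape  = (3, 3, 2)

def ttv_mode0(coords, vals, shape, vec):
    """Contract mode 0 with vec, leaving a dense (J, K) result."""
    out = [[0.0] * shape[2] for _ in range(shape[1])]
    for (i, j, k), v in zip(coords, vals):
        out[j][k] += v * vec[i]
    return out

print(ttv_mode0(coords, vals, shape, [1.0, 1.0, 1.0]))
# [[0.0, 1.0], [0.0, 3.0], [2.0, 0.0]]
```

Kernels like this are irregular and memory-bound, which is why representative benchmarks matter for evaluating computer systems.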
Differential Privacy Techniques for Cyber Physical Systems: A Survey
Modern cyber physical systems (CPSs) are widely used in our daily lives
because of the development of information and communication technologies
(ICT). With the proliferation of CPSs, the security and privacy threats
associated with these systems are also increasing. Passive attacks are being
used by intruders to gain access to private information in CPSs. In order to
make CPS data more secure, certain privacy-preservation strategies, such as
encryption and k-anonymity, have been presented in the past. However, with the
advances in CPS architectures, these techniques also need certain
modifications. Meanwhile, differential privacy has emerged as an efficient
technique to protect CPS data privacy. In this paper, we present a
comprehensive survey of differential privacy techniques for CPSs. In
particular, we survey the application and implementation of differential
privacy in four major application domains of CPSs, namely energy systems,
transportation systems, healthcare and medical systems, and the industrial
Internet of Things (IIoT). Furthermore, we present open issues, challenges,
and future research directions for differential privacy techniques for CPSs.
This survey can serve as a basis for the development of modern differential
privacy techniques to address various problems and data privacy scenarios of
CPSs.
Comment: 46 pages, 12 figures
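As background, the basic building block behind most surveyed differential privacy techniques is the Laplace mechanism, which adds noise with scale sensitivity/epsilon to a numeric query. A minimal sketch with illustrative parameters:

```python
import math
import random

# The Laplace mechanism: release true_value + Laplace(0, b) noise,
# where b = sensitivity / epsilon. Argument values are illustrative.

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    scale = sensitivity / epsilon
    u = rng.random() - 0.5               # uniform in [-0.5, 0.5)
    # inverse-CDF sample of Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# e.g. releasing a count of 42 with sensitivity 1 and epsilon 0.5
print(laplace_mechanism(42, 1.0, 0.5))
```

Smaller epsilon means stronger privacy and larger noise; the released values are unbiased, so averages over many queries stay close to the truth.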
Epione: Lightweight Contact Tracing with Strong Privacy
Contact tracing is an essential tool in containing infectious diseases such
as COVID-19. Many countries and research groups have launched or announced
mobile apps to facilitate contact tracing by recording contacts between users
with some privacy considerations. Most of the focus has been on using random
tokens, which are exchanged during encounters and stored locally on users'
phones. Prior systems allow users to search over released tokens in order to
learn if they have recently been in the proximity of a user who has since been
diagnosed with the disease. However, prior approaches do not provide
end-to-end privacy in the collection and querying of tokens. In particular,
these approaches are vulnerable to linkage attacks by users using token
metadata, linkage attacks by the server, and false reporting by users.
In this work, we introduce Epione, a lightweight system for contact tracing
with strong privacy protections. Epione alerts users directly if any of their
contacts have been diagnosed with the disease, while protecting the privacy of
users' contacts from both central services and other users, and provides
protection against false reporting. As a key building block, we present a new
cryptographic tool for secure two-party private set intersection cardinality
(PSI-CA), which allows two parties, each holding a set of items, to learn the
intersection size of their private sets without revealing the items in the
intersection. We specifically tailor it to the case of large-scale contact
tracing, where clients have small input sets and the server's database of
tokens is much larger.
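To illustrate the PSI-CA primitive, here is a toy Diffie-Hellman-style sketch: each party blinds hashed items with a secret exponent, and the commutativity of exponentiation lets the client count matches without seeing the server's items. The prime, exponents, and tokens are illustrative; this is only the textbook idea and is not secure or efficient as written, and Epione's tailored protocol differs:

```python
import hashlib

# Toy DH-style PSI-CA. P is a Mersenne prime chosen for illustration.
P = 2**127 - 1

def H(item):                        # hash an item to a group element
    d = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(d, "big") % (P - 2) + 2

def blind(items, secret):
    return [pow(H(x), secret, P) for x in items]

def reblind(elems, secret):
    return {pow(e, secret, P) for e in elems}

a, b = 123457, 987643               # client / server secret exponents
client_tokens = {"tok1", "tok2", "tok3"}
server_tokens = {"tok2", "tok3", "tok9"}

# client -> server: H(x)^a; server replies with (H(x)^a)^b and H(y)^b
doubly_blinded = reblind(blind(client_tokens, a), b)
server_blinded = blind(server_tokens, b)

# client raises the server's elements to a; H(x)^(ab) == H(x)^(ba),
# so shared tokens collide while all other values stay opaque
cardinality = len(doubly_blinded & reblind(server_blinded, a))
print(cardinality)                  # 2 shared tokens, no items revealed
```

The client learns only the count (here 2), which is exactly the PSI-CA functionality the abstract describes.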
Efficient and Secure ECDSA Algorithm and its Applications: A Survey
Public-key cryptography algorithms, especially elliptic curve cryptography
(ECC) and the elliptic curve digital signature algorithm (ECDSA), have been
attracting attention from researchers in many institutions because these
algorithms provide security and high performance when used in areas such as
e-healthcare, e-banking, e-commerce, e-vehicular networks, and e-governance.
These algorithms heighten security against various attacks and, at the same
time, improve performance in terms of time, memory, computational complexity,
and energy savings in resource-constrained environments and large systems.
This paper presents a detailed and comprehensive survey of updates to the
ECDSA algorithm in terms of performance, security, and applications.
Comment: 31 pages, 4 figures
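To make the sign/verify algebra concrete, here is a toy ECDSA over the textbook curve y^2 = x^3 + 2x + 2 (mod 17), whose point (5, 1) generates a group of prime order 19. These parameters are a standard teaching example, far too small for real security, and are not taken from the survey:

```python
# p: field prime, a: curve coefficient, n: group order, G: base point
p, a, n = 17, 2, 19
G = (5, 1)
O = None                            # point at infinity

def add(P1, P2):                    # elliptic-curve point addition mod p
    if P1 is O: return P2
    if P2 is O: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, Pt):                     # double-and-add scalar multiplication
    R = O
    while k:
        if k & 1:
            R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

def sign(d, h, k):                  # h: message hash mod n, k: nonce
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (h + d * r) % n
    return r, s

def verify(Q, h, sig):
    r, s = sig
    w = pow(s, -1, n)
    X = add(mul(h * w % n, G), mul(r * w % n, Q))
    return X is not O and X[0] % n == r

d = 7                               # private key (illustrative)
Q = mul(d, G)                       # public key Q = dG
sig = sign(d, h=10, k=5)            # k fixed only for reproducibility
print(verify(Q, 10, sig))           # True
print(verify(Q, 11, sig))           # False: tampered hash
```

Real implementations use standardized curves with ~256-bit primes and must generate the nonce k securely; nonce reuse leaks the private key.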
Are Neurodynamic Organizations A Fundamental Property of Teamwork?
When performing a task, it is important for teams to optimize their strategies and actions to maximize value and avoid the cost of surprise. The decisions teams make sometimes have unintended consequences, and they must then reorganize their thinking, roles, and/or configuration into corrective structures more appropriate for the situation. In this study we ask: What are the neurodynamic properties of these reorganizations, and how do they relate to the moment-by-moment, and longer-term, performance outcomes of teams? We describe an information-organization approach for detecting and quantifying the fluctuating neurodynamic organizations in teams. Neurodynamic organization is the propensity of team members to enter into prolonged (minutes) metastable neurodynamic relationships as they encounter and resolve disturbances to their normal rhythms. Team neurodynamic organizations were detected and modeled by transforming the physical units of each team member's EEG power levels into Shannon entropy-derived information units about the team's organization and synchronization. Entropy is a measure of the variability or uncertainty of information in a data stream. This physical-unit-to-information-unit transformation bridges micro-level social coordination events with macro-level expert observations of team behavior, allowing multimodal comparisons across the neural, cognitive, and behavioral time scales of teamwork. The measures included the entropy of each team member's data stream, the overall team entropy, and the mutual information between dyad pairs of the team. Mutual information can be thought of as capturing periods of team member synchrony.
Comparisons between individual entropy and mutual information levels for the dyad combinations of three-person teams provided quantitative estimates of the proportion of a person's neurodynamic organizations that represented periods of synchrony with other team members, which in aggregate provided measures of the overall degree of neurodynamic interaction of the team. We propose that increased neurodynamic organization occurs when a team's operating rhythm can no longer support the complexity of the task and the team needs to expend energy to reorganize into structures that better minimize the "surprise" in the environment. Consistent with this hypothesis, the frequency and magnitude of neurodynamic organizations were lower in experienced military and healthcare teams than in more junior teams. Similar dynamical properties of neurodynamic organization were observed in models of the EEG data streams of military, healthcare, and high school science teams, suggesting that neurodynamic organization may be a common property of teamwork. The innovation of this study is the potential it raises for developing globally applicable quantitative models of team dynamics that will allow comparisons to be made across teams, tasks, and training protocols.
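The entropy and mutual-information measures described above can be computed directly from discretized symbol streams. A minimal sketch, where the streams are made-up stand-ins for binned EEG power levels:

```python
from collections import Counter
from math import log2

def entropy(stream):
    """Shannon entropy (bits) of a discrete symbol stream."""
    total = len(stream)
    return -sum(c / total * log2(c / total)
                for c in Counter(stream).values())

def mutual_information(s1, s2):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) over paired samples."""
    return entropy(s1) + entropy(s2) - entropy(list(zip(s1, s2)))

member_a = [0, 0, 1, 1, 0, 1, 1, 0]   # binned EEG power (made up)
member_b = [0, 0, 1, 1, 0, 1, 1, 0]   # perfectly synchronized with a
member_c = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of a

print(entropy(member_a))                       # 1.0 bit
print(mutual_information(member_a, member_b))  # 1.0: full synchrony
print(mutual_information(member_a, member_c))  # 0.0: no synchrony
```

High dyadic mutual information corresponds to the periods of team-member synchrony the abstract describes, while individual entropies track each member's variability.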
Reward Processes and Performance Simulation in Supermarket Models with Different Servers
Supermarket models with different servers have become key to modeling resource
management in stochastic networks such as computer networks, manufacturing
systems, and transportation networks, while these different servers always
make the analysis of such a supermarket model more interesting, difficult, and
challenging. This paper provides a novel method for analyzing the supermarket
model with different servers through a multi-dimensional continuous-time
Markov reward process. Firstly, utility functions are constructed to express a
routing selection mechanism that depends on queue lengths, on service rates,
and on some probabilities of individual preference. Then, applying the
continuous-time Markov reward process, some segmented stochastic integrals of
the random reward function are established by means of an event-driven
technique. Based on this, the mean of the random reward function over a finite
time period is effectively computed by means of the state jump points of the
Markov reward process, and the mean of the discounted random reward function
over an infinite time period can be calculated through the same event-driven
technique. Finally, some simulation experiments are given to indicate how the
expected queue length of each server depends on the main parameters of this
supermarket model.
Comment: 35 pages, 4 figures; in International Journal of Simulation and
Process Modelling; 201
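The routing selection mechanism can be sketched as a utility score per server that combines queue length, service rate, and an individual-preference term. The weights and values below are illustrative assumptions, not the paper's actual utility functions:

```python
# An arriving customer scores each server and joins the best one.

def utility(queue_len, mu, pref, w=(1.0, 1.0, 0.5)):
    """Higher is better: short queue, fast server, preferred server."""
    return -w[0] * queue_len + w[1] * mu + w[2] * pref

def route(queues, rates, prefs):
    scores = [utility(q, m, p) for q, m, p in zip(queues, rates, prefs)]
    return max(range(len(scores)), key=scores.__getitem__)

queues = [3, 1, 4]          # current queue lengths
rates  = [1.0, 0.5, 2.0]    # service rates mu_i
prefs  = [0.0, 0.0, 1.0]    # individual-preference weights
print(route(queues, rates, prefs))   # 1: the short, if slow, queue wins
```

Varying the weights trades queue-length sensitivity against service-rate and preference sensitivity, which is the kind of dependence the paper's simulation experiments explore.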
A Conceptual Approach to Complex Model Management with Generalized Modelling Patterns and Evolutionary Identification
Modeling and simulation of complex systems are powerful ways to investigate a
multitude of natural phenomena, providing extended knowledge of their
structure and behavior. However, enhanced modeling and simulation require the
integration of various data and knowledge sources, models of various kinds
(data-driven models, numerical models, simulation models, etc.), and
intelligent components into one composite solution. The growing complexity of
such a composite model leads to the need for specific approaches to its
management; this need deepens where the model itself becomes a complex system.
One of the important aspects of complex model management is dealing with
uncertainty of various kinds (context, parametric, structural, input/output)
to control the model. In situations where the system being modeled, or the
modeling requirements, change over time, specific methods and tools are needed
to perform modeling and application procedures (meta-modeling operations) in
an automatic manner. To support the automatic building and management of
complex models, we propose a general evolutionary computation approach which
enables the management of complexity and uncertainty of various kinds. The
approach is based on an evolutionary investigation of the model phase space to
identify the best model structure and parameters. Examples from different
areas (healthcare, hydrometeorology, social network analysis) are elaborated
with the proposed approach and solutions.
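A minimal evolutionary search over model parameters illustrates the idea of investigating a model's phase space. The quadratic objective is a made-up stand-in for a real model-identification target, and all hyperparameters are illustrative:

```python
import random

def fitness(params):
    # hypothetical identification objective: distance to a "true" model
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

def evolve(pop_size=20, generations=60, sigma=0.3, seed=42):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
                    for x, y in parents]        # Gaussian mutation
        pop = parents + children                # elitist replacement
    return max(pop, key=fitness)

best = evolve()
print(best)   # near the optimum (3, -1)
```

Real model identification would replace the toy fitness with a simulation-versus-data error and extend the genome to encode model structure as well as parameters.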
Revisiting Large Scale Distributed Machine Learning
Nowadays, with the widespread use of smartphones and other portable gadgets
equipped with a variety of sensors, data is ubiquitously available, and the
focus of machine learning has shifted from being able to infer from small
training samples to dealing with large-scale, high-dimensional data. In
domains such as personal healthcare applications, which motivate this survey,
distributed machine learning is a promising line of research, both for scaling
up learning algorithms and, above all, for dealing with data that is
inherently produced at different locations. This report offers a thorough
overview of state-of-the-art algorithms for distributed machine learning, for
both supervised and unsupervised learning, ranging from simple linear logistic
regression to graphical models and clustering. We propose future directions
for most categories, specific to the potential personal healthcare
applications. With this in mind, the report focuses on how security and low
communication overhead can be assured in the specific case of a strictly
client-server architectural model. As particular directions, we provide an
exhaustive presentation of an empirical clustering algorithm, k-windows, and
propose an asynchronous distributed machine learning algorithm that would
scale well and would also be computationally cheap and easy to implement.
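A sketch of the strictly client-server setting described above: each client computes a logistic-regression gradient on locally held data, and the server averages the gradients to update the shared model. The data, learning rate, and round count are illustrative, and this synchronous scheme is a simplification of the asynchronous algorithm the report proposes:

```python
import math

def gradient(w, X, y):
    """Average logistic-loss gradient over one client's local data."""
    g = [0.0] * len(w)
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        p = 1.0 / (1.0 + math.exp(-z))
        for j, xj in enumerate(xi):
            g[j] += (p - yi) * xj
    return [gj / len(X) for gj in g]

# two clients with locally held data; feature 0 is a bias term
clients = [
    ([[1.0, 0.0], [1.0, 0.2]], [0, 0]),
    ([[1.0, 0.9], [1.0, 1.0]], [1, 1]),
]

w = [0.0, 0.0]
for _ in range(500):                 # synchronous rounds
    grads = [gradient(w, X, y) for X, y in clients]
    avg = [sum(g[j] for g in grads) / len(grads) for j in range(len(w))]
    w = [wj - 1.0 * aj for wj, aj in zip(w, avg)]

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x)))

print(predict(0.1) < 0.5 < predict(0.95))   # True: classes separated
```

Only gradients cross the network, never raw records, which is the communication- and privacy-friendly property the report emphasizes for healthcare data.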
Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record
The wide implementation of electronic health record (EHR) systems facilitates
the collection of large-scale health data from real clinical settings. Despite
the significant increase in adoption of EHR systems, this data remains largely
unexplored, but presents a rich data source for knowledge discovery from
patient health histories in tasks such as understanding disease correlations
and predicting health outcomes. However, the heterogeneity, sparsity, noise,
and bias in this data present many complex challenges. This complexity makes it
difficult to translate potentially relevant information into machine learning
algorithms. In this paper, we propose a computational framework, Patient2Vec,
to learn an interpretable deep representation of longitudinal EHR data which is
personalized for each patient. To evaluate this approach, we apply it to the
prediction of future hospitalizations using real EHR data and compare its
predictive performance with baseline methods. Patient2Vec produces a vector
space with meaningful structure, and it achieves an AUC of around 0.799,
outperforming baseline methods. Finally, the learned feature importance can be
visualized and interpreted at both the individual and population levels to
provide clinical insights.
Comment: Accepted by IEEE Access
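The AUC metric reported above is the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal pairwise computation on made-up scores:

```python
def auc(scores, labels):
    """Fraction of positive/negative pairs ranked correctly (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.7, 0.3]   # model outputs (made up)
labels = [1,   1,   0,   1,   0]     # ground-truth outcomes
print(auc(scores, labels))           # 1.0: perfect ranking here
```

An AUC of 0.799 thus means the model ranks a hospitalized patient above a non-hospitalized one about 80% of the time, regardless of any classification threshold.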