265 research outputs found
Multipartite entangled states, symmetric matrices and error-correcting codes
A pure quantum state of $N$ subsystems, each with $d$ levels, is called
$k$-uniform if all of its reductions to $k$ qudits are maximally mixed. We
investigate general constructions of $k$-uniform pure quantum states of $N$
subsystems with $d$ levels. We provide one construction via symmetric
matrices and a second one through classical error-correcting codes. There
are three main results arising from our constructions. Firstly, we show
that for any given even $N$, there always exists an $N/2$-uniform $N$-qudit
quantum state of level $p$ for sufficiently large prime $p$. Secondly, both
constructions show that there exist $k$-uniform $N$-qudit pure quantum
states with $k$ proportional to $N$, i.e., $k = \Omega(N)$, although the
construction from symmetric matrices outperforms the one by error-correcting
codes. Thirdly, our symmetric matrix construction provides a positive answer
to the open question in \cite{DA} on whether there exists a $k$-uniform
$N$-qudit pure quantum state for all $k$. In fact, we can further prove
that, for every $k$, there exists a constant $N_0(k)$ such that there exists
a $k$-uniform $N$-qudit quantum state for all $N \ge N_0(k)$. In addition,
by using concatenation of algebraic geometry codes, we give an explicit
construction of $k$-uniform quantum states as $N$ tends to infinity.
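For reference, the defining condition of uniformity can be sketched as follows; the notation ($|\psi\rangle$ for the state, $S$ for a subset of subsystems) is the standard convention and is assumed here, not quoted from the paper. A state of $N$ qudits is $k$-uniform when every $k$-body reduced density matrix is maximally mixed, i.e., proportional to the identity:

```latex
\rho_S \;=\; \operatorname{Tr}_{\bar{S}} \, |\psi\rangle\langle\psi|
\;=\; \frac{1}{d^{k}}\, I_{d^{k}},
\qquad \forall\, S \subseteq \{1,\dots,N\},\ |S| = k,
```

where $\bar{S}$ denotes the complementary set of subsystems traced out.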
A Parallel Patient Treatment Time Prediction Algorithm and its Applications in Hospital Queuing-Recommendation in a Big Data Environment
Effective patient queue management to minimize patient wait delays and
patient overcrowding is one of the major challenges faced by hospitals.
Unnecessary and annoying waits for long periods result in substantial human
resource and time wastage and increase the frustration endured by patients. For
each patient in the queue, the waiting time is the total treatment time of
all patients ahead of him. It would be convenient and preferable if the
patients could receive the most efficient treatment plan and know the predicted
waiting time through a mobile application that updates in real-time. Therefore,
we propose a Patient Treatment Time Prediction (PTTP) algorithm to predict the
waiting time for each treatment task for a patient. We use realistic patient
data from various hospitals to obtain a patient treatment time model for each
task. Based on this large-scale, realistic dataset, the treatment time for each
patient in the current queue of each task is predicted. Based on the predicted
waiting time, a Hospital Queuing-Recommendation (HQR) system is developed. HQR
calculates and predicts an efficient and convenient treatment plan recommended
for the patient. Because of the large-scale, realistic dataset and the
requirement for real-time response, the PTTP algorithm and HQR system mandate
efficiency and low-latency response. We use an Apache Spark-based cloud
implementation at the National Supercomputing Center in Changsha (NSCC) to
achieve the aforementioned goals. Extensive experimentation and simulation
results demonstrate the effectiveness and applicability of our proposed model
to recommend an effective treatment plan for patients to minimize their wait
times in hospitals.
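The core queuing idea above can be sketched in a few lines: fit a per-task treatment-time model from historical records, then predict a patient's wait as the sum of the predicted treatment times of everyone ahead in the queue. This is a minimal sketch, not the paper's actual PTTP model: a plain per-task mean stands in for the trained model, which also conditions on patient and time features, and the function names are illustrative.

```python
from collections import defaultdict

def build_treatment_time_model(records):
    """Fit a per-task treatment-time model from historical (task, duration)
    records. A plain mean per task stands in for the paper's trained model."""
    totals = defaultdict(lambda: [0.0, 0])
    for task, duration in records:
        totals[task][0] += duration
        totals[task][1] += 1
    return {task: s / n for task, (s, n) in totals.items()}

def predict_wait_time(queue_ahead, model):
    """A patient's predicted wait is the sum of the predicted treatment
    times of all patients ahead of them in the task queue."""
    return sum(model[task] for task in queue_ahead)
```

With `records = [("ct", 10), ("ct", 20), ("blood", 5)]`, the model predicts 15 minutes per CT scan, so a queue of two CT patients and one blood test ahead yields a 35-minute wait.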
A Disease Diagnosis and Treatment Recommendation System Based on Big Data Mining and Cloud Computing
It is crucial to provide compatible treatment schemes for a disease according
to various symptoms at different stages. However, most classification methods
might be ineffective in accurately classifying a disease that holds the
characteristics of multiple treatment stages, various symptoms, and
multi-pathogenesis. Moreover, there are limited exchanges and cooperative
actions in disease diagnoses and treatments between different departments and
hospitals. Thus, when new diseases occur with atypical symptoms, inexperienced
doctors might have difficulty in identifying them promptly and accurately.
Therefore, to maximize the utilization of the advanced medical technology of
developed hospitals and the rich medical knowledge of experienced doctors, a
Disease Diagnosis and Treatment Recommendation System (DDTRS) is proposed in
this paper. First, to identify disease symptoms more accurately, a
Density-Peaked Clustering Analysis (DPCA) algorithm is introduced for
disease-symptom clustering. In addition, association analyses on
Disease-Diagnosis (D-D) rules and Disease-Treatment (D-T) rules are conducted
by the Apriori algorithm separately. The appropriate diagnosis and treatment
schemes are recommended for patients and inexperienced doctors, even if they
are in a limited therapeutic environment. Moreover, to reach the goals of high
performance and low latency response, we implement a parallel solution for
DDTRS using the Apache Spark cloud platform. Extensive experimental results
demonstrate that the proposed DDTRS realizes disease-symptom clustering
effectively and derives disease treatment recommendations intelligently and
accurately.
A Periodicity-based Parallel Time Series Prediction Algorithm in Cloud Computing Environments
In the era of big data, practical applications in various domains continually
generate large-scale time-series data. Among them, some data show significant
or potential periodicity characteristics, such as meteorological and financial
data. It is critical to efficiently identify the potential periodic patterns
from massive time-series data and provide accurate predictions. In this paper,
a Periodicity-based Parallel Time Series Prediction (PPTSP) algorithm for
large-scale time-series data is proposed and implemented in the Apache Spark
cloud computing environment. To effectively handle the massive historical
datasets, a Time Series Data Compression and Abstraction (TSDCA) algorithm is
presented, which can reduce the data scale while accurately extracting the
characteristics. Based on this, we propose a Multi-layer Time Series Periodic
Pattern Recognition (MTSPPR) algorithm using the Fourier Spectrum Analysis
(FSA) method. In addition, a Periodicity-based Time Series Prediction (PTSP)
algorithm is proposed. Data in the subsequent period are predicted based on all
previous period models, in which a time attenuation factor is introduced to
control the impact of different periods on the prediction results. Moreover, to
improve the performance of the proposed algorithms, we propose a parallel
solution on the Apache Spark platform, using the Streaming real-time computing
module. To efficiently process the large-scale time-series datasets in
distributed computing environments, Distributed Streams (DStreams) and
Resilient Distributed Datasets (RDDs) are used to store and calculate these
datasets. Extensive experimental results show that our PPTSP algorithm has
significant advantages compared with other algorithms in terms of prediction
accuracy and performance.
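The Fourier-spectrum step of periodicity detection can be sketched as follows: the frequency bin with the largest magnitude (ignoring the DC component) identifies the dominant periodic component. This is a minimal single-period sketch, not the multi-layer MTSPPR algorithm itself; the function name is illustrative.

```python
import numpy as np

def dominant_period(series):
    """Estimate the dominant period of a uniformly sampled time series
    via the Fourier spectrum, in units of the sampling interval."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    k = spectrum[1:].argmax() + 1         # skip the zero-frequency bin
    return 1.0 / freqs[k]
```

For hourly data with a daily cycle, for example, ten days of a 24-hour sinusoid yields a dominant period of 24 samples.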
A Method of Radiographic Digital Imaging Automatic Inspection of Tube-to-Tube-Sheet Welded Joints Based on VC++
Radiographic inspection technology is applied to the characteristics of tube-to-tube-sheet welded joints, with a linear diode array used as the receiving device. The system drives a digital stepper-motor control card using VC++, and the subdivided output drives the rotating device through exactly one full revolution. The linear diode array is fixed on the rotating device so that it can accomplish a circular scan; the signals are then transferred to the PC, transformed into images, and displayed on the screen. The algorithm for transforming the square-matrix scan data into a circle-scan stitched image is discussed in detail. Finally, the special image-processing methods used in the radiographic digital imaging automatic detection system to optimize the images are introduced.
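The square-matrix-to-circular-image stitching described above amounts to a polar-to-Cartesian mapping: each detector sample, indexed by rotation angle and radial position, is placed at the corresponding pixel of a square image. The sketch below is a minimal nearest-pixel version of that idea, assuming row-major scan data `scan[angle_step][radius_step]`; the paper's actual algorithm is not given, so this is an illustration only.

```python
import math

def polar_to_cartesian(scan, size):
    """Stitch circle-scan data scan[angle][radius], as acquired by a
    linear detector rotated one full revolution, onto a square image
    using nearest-pixel polar-to-Cartesian mapping."""
    n_angles, n_radii = len(scan), len(scan[0])
    img = [[0] * size for _ in range(size)]
    c = (size - 1) / 2.0                      # image center
    for a in range(n_angles):
        theta = 2 * math.pi * a / n_angles
        for r in range(n_radii):
            rho = r * c / (n_radii - 1)       # scale radius to the image
            x = int(round(c + rho * math.cos(theta)))
            y = int(round(c + rho * math.sin(theta)))
            img[y][x] = scan[a][r]
    return img
```

A production version would interpolate between samples rather than overwrite nearest pixels, but the geometry is the same.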
A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks
Benefitting from large-scale training datasets and the complex training
network, Convolutional Neural Networks (CNNs) are widely applied in various
fields with high accuracy. However, the training process of CNNs is very
time-consuming, where large amounts of training samples and iterative
operations are required to obtain high-quality weight parameters. In this
paper, we focus on the time-consuming training process of large-scale CNNs and
propose a Bi-layered Parallel Training (BPT-CNN) architecture in distributed
computing environments. BPT-CNN consists of two main components: (a) an
outer-layer parallel training for multiple CNN subnetworks on separate data
subsets, and (b) an inner-layer parallel training for each subnetwork. In the
outer-layer parallelism, we address critical issues of distributed and parallel
computing, including data communication, synchronization, and workload balance.
A heterogeneous-aware Incremental Data Partitioning and Allocation (IDPA)
strategy is proposed, where large-scale training datasets are partitioned and
allocated to the computing nodes in batches according to their computing power.
To minimize the synchronization waiting during the global weight update
process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In
the inner-layer parallelism, we further accelerate the training process for
each CNN subnetwork on each computer, where the computation steps of the
convolutional layer and the local weight training are parallelized based on
task parallelism.
We introduce task decomposition and scheduling strategies with the objectives
of thread-level load balancing and minimum waiting time for critical paths.
Extensive experimental results indicate that the proposed BPT-CNN effectively
improves the training performance of CNNs while maintaining accuracy.
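The heterogeneity-aware partitioning idea behind IDPA can be sketched simply: allocate each node a share of the training set proportional to its computing power, so faster nodes receive larger partitions and all nodes finish a batch at roughly the same time. The code below is a minimal sketch under that assumption; the tie-breaking rule for leftover samples is hypothetical, and the real IDPA also partitions incrementally in batches.

```python
def partition_batches(total_samples, node_power):
    """Split a training set across heterogeneous nodes in proportion to
    their computing power (node name -> relative power)."""
    total_power = sum(node_power.values())
    sizes = {node: total_samples * p // total_power
             for node, p in node_power.items()}
    # Hand leftover samples to the most powerful nodes first
    # (a hypothetical tie-breaking rule for the integer remainder).
    remainder = total_samples - sum(sizes.values())
    for node in sorted(node_power, key=node_power.get, reverse=True)[:remainder]:
        sizes[node] += 1
    return sizes
```

For instance, a node three times as fast as its peer receives three quarters of the samples.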
Parallel Protein Community Detection in Large-scale PPI Networks Based on Multi-source Learning
Protein interactions constitute the fundamental building block of almost
every life activity. Identifying protein communities from Protein-Protein
Interaction (PPI) networks is essential to understand the principles of
cellular organization and explore the causes of various diseases. It is
critical to integrate multiple data resources to identify reliable protein
communities that have biological significance and improve the performance of
community detection methods for large-scale PPI networks. In this paper, we
propose a Multi-source Learning based Protein Community Detection (MLPCD)
algorithm by integrating Gene Expression Data (GED) and a parallel solution of
MLPCD using cloud computing technology. To effectively discover the biological
functions of proteins participating in different cellular processes, GED
under different conditions is integrated with the original PPI network to
reconstruct a Weighted-PPI (WPPI) network. To flexibly identify protein
communities of different scales, we define community modularity and functional
cohesion measurements and detect protein communities from WPPI using an
agglomerative method. In addition, we respectively compare the detected
communities with known protein complexes and evaluate the functional enrichment
of protein function modules using Gene Ontology annotations. Moreover, we
implement a parallel version of the MLPCD algorithm on the Apache Spark
platform to enhance the performance of the algorithm for large-scale realistic
PPI networks. Extensive experimental results indicate the superiority and
notable advantages of the MLPCD algorithm over the relevant algorithms in terms
of accuracy and performance.
Comment: IEEE/ACM Transactions on Computational Biology and Bioinformatics, 201
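The WPPI construction above, integrating gene expression with the PPI network, can be illustrated by one common weighting scheme: score each interaction edge by the co-expression (Pearson correlation) of its two proteins' expression profiles. This is a sketch of the general idea, not necessarily the exact weighting MLPCD uses; the function names are illustrative.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def weight_ppi_edges(edges, expression):
    """Weight each PPI edge by the co-expression of its two proteins,
    one simple way to build a weighted PPI (WPPI) network from GED."""
    return {(u, v): pearson(expression[u], expression[v]) for u, v in edges}
```

Edges between co-regulated proteins then carry weights near 1, while anti-correlated pairs fall toward -1 and can be down-weighted or pruned.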
Digital Radiographic Imaging Inspection System for the Tube-to-Tube-Sheet Welding Joints of Heat Exchangers
The tube-to-tube-sheet welding joints are the most critical joints of a heat exchanger. A special nondestructive testing technology, based on a digital radiographic imaging automatic inspection system, is proposed for the characteristics of these joints. A new-style linear diode array is used as the ray-receiving device; it is fixed on an automatic rotating device and driven through one full revolution to achieve automatic inspection of the tube-to-tube-sheet welding joints. Finally, the whole welding-joint image is processed to generate the testing results. The results show that this inspection system can quickly and conveniently perform automatic inspection of tube-to-tube-sheet welding joints while realizing digital radiographic imaging of the welds.
A Survey on Applications of Artificial Intelligence in Fighting Against COVID-19
The COVID-19 pandemic caused by the SARS-CoV-2 virus has spread rapidly
worldwide, leading to a global outbreak. Most governments, enterprises, and
scientific research institutions are participating in the COVID-19 struggle to
curb the spread of the pandemic. As a powerful tool against COVID-19,
artificial intelligence (AI) technologies are widely used in combating this
pandemic. In this survey, we investigate the main scope and contributions of AI
in combating COVID-19 from the aspects of disease detection and diagnosis,
virology and pathogenesis, drug and vaccine development, and epidemic and
transmission prediction. In addition, we summarize the available data and
resources that can be used for AI-based COVID-19 research. Finally, the main
challenges and potential directions of AI in fighting against COVID-19 are
discussed. Currently, AI mainly focuses on medical image inspection, genomics,
drug development, and transmission prediction, and thus AI still has great
potential in this field. This survey presents medical and AI researchers with a
comprehensive view of the existing and potential applications of AI technology
in combating COVID-19, with the goal of inspiring researchers to continue to
maximize the advantages of AI and big data to fight COVID-19.
Comment: This manuscript was submitted to ACM Computing Surveys
Dynamic Planning of Bicycle Stations in Dockless Public Bicycle-sharing System Using Gated Graph Neural Network
Benefiting from convenient cycling and flexible parking locations, the
Dockless Public Bicycle-sharing (DL-PBS) network becomes increasingly popular
in many countries. However, redundant and low-utility stations waste public
urban space and maintenance costs of DL-PBS vendors. In this paper, we propose
a Bicycle Station Dynamic Planning (BSDP) system to dynamically provide the
optimal bicycle station layout for the DL-PBS network. The BSDP system contains
four modules: bicycle drop-off location clustering, bicycle-station graph
modeling, bicycle-station location prediction, and bicycle-station layout
recommendation. In the bicycle drop-off location clustering module, candidate
bicycle stations are clustered from each spatio-temporal subset of the
large-scale cycling trajectory records. In the bicycle-station graph modeling
module, a weighted digraph model is built based on the clustering results and
inferior stations with low station revenue and utility are filtered. Then,
graph models across time periods are combined to create a graph sequence model.
In the bicycle-station location prediction module, the GGNN model is used to
train the graph sequence data and dynamically predict bicycle stations in the
next period. In the bicycle-station layout recommendation module, the predicted
bicycle stations are fine-tuned according to the government urban management
plan, which ensures that the recommended station layout is conducive to city
management, vendor revenue, and user convenience. Experiments on actual DL-PBS
networks verify the effectiveness, accuracy and feasibility of the proposed
BSDP system.
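The graph-modeling step above, building a weighted digraph from trips and filtering inferior stations, can be sketched as follows. The sketch assumes trips are (origin, destination) station pairs and that filtering is a simple revenue threshold; the actual BSDP system also uses station utility and combines graphs across time periods, so this is an illustration only.

```python
def build_station_graph(trips, revenue, min_revenue):
    """Build a weighted digraph over candidate stations from cycling
    trips, with edge weights counting trips, then drop inferior
    stations whose revenue falls below a threshold (hypothetical rule)."""
    edges = {}
    for origin, dest in trips:
        edges[(origin, dest)] = edges.get((origin, dest), 0) + 1
    keep = {s for s in revenue if revenue[s] >= min_revenue}
    return {e: w for e, w in edges.items() if e[0] in keep and e[1] in keep}
```

Filtering nodes before sequence modeling keeps the per-period graphs small and focused on viable stations.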