Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks
Graph convolutional network (GCN) has been successfully applied to many
graph-based applications; however, training a large-scale GCN remains
challenging. Current SGD-based algorithms suffer from either a high
computational cost that grows exponentially with the number of GCN layers, or a
large space requirement for keeping the entire graph and the embedding of each
node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm
that is suitable for SGD-based training by exploiting the graph clustering
structure. Cluster-GCN works as follows: at each step, it samples a block
of nodes associated with a dense subgraph identified by a graph clustering
algorithm, and restricts the neighborhood search to this subgraph. This
simple but effective strategy leads to significantly improved memory and
computational efficiency while achieving test accuracy comparable to that of
previous algorithms. To test the scalability of our algorithm, we create a
new Amazon2M dataset with 2 million nodes and 61 million edges, which is more
than 5 times larger than the previous largest publicly available dataset (Reddit).
For training a 3-layer GCN on this data, Cluster-GCN is faster than the
previous state-of-the-art VR-GCN (1523 seconds vs 1961 seconds) and uses much
less memory (2.2GB vs 11.2GB). Furthermore, for training a 4-layer GCN on this
data, our algorithm can finish in around 36 minutes, while all the existing GCN
training algorithms fail to train due to out-of-memory issues. Moreover,
Cluster-GCN allows us to train much deeper GCNs without much time or memory
overhead, which leads to improved prediction accuracy---using a 5-layer
Cluster-GCN, we achieve state-of-the-art test F1 score 99.36 on the PPI
dataset, while the previous best result was 98.71 by [16]. Our codes are
publicly available at
https://github.com/google-research/google-research/tree/master/cluster_gcn.
Comment: In Proceedings of the 25th ACM SIGKDD International Conference on
Knowledge Discovery & Data Mining (KDD'19).
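The cluster-restricted propagation the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the dense adjacency matrix, toy two-cluster graph, and single-layer forward pass are assumptions made purely for illustration.

```python
import numpy as np

def cluster_gcn_step(adj, feats, weight, cluster_nodes):
    """One Cluster-GCN-style step: restrict GCN propagation to the
    subgraph induced by a sampled cluster of nodes."""
    idx = np.asarray(cluster_nodes)
    sub_adj = adj[np.ix_(idx, idx)] + np.eye(len(idx))  # cluster subgraph + self-loops
    deg = sub_adj.sum(axis=1)
    norm = sub_adj / np.sqrt(np.outer(deg, deg))        # symmetric normalization
    return np.maximum(norm @ feats[idx] @ weight, 0.0)  # one GCN layer + ReLU

# Toy graph with two dense clusters, {0,1,2} and {3,4,5}, standing in for
# the blocks a real graph-clustering algorithm (e.g. METIS) would find.
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    adj[a, b] = adj[b, a] = 1.0
rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 4))    # node features
weight = rng.standard_normal((4, 8))   # layer weight
h = cluster_gcn_step(adj, feats, weight, [0, 1, 2])  # step on one cluster
```

Because each step touches only one cluster's adjacency block and features, memory scales with the cluster size rather than with the neighborhood expansion of the full graph, which is the efficiency the abstract claims.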
Capulet and Slingshot share overlapping functions during Drosophila eye morphogenesis
BACKGROUND: CAP/Capulet (Capt), Slingshot (Ssh) and Cofilin/Twinstar (Tsr) are actin-binding proteins that restrict actin polymerization. Previously, low-resolution analyses of loss-of-function mutations in capt, ssh and tsr showed that all three cause ectopic F-actin accumulation in various Drosophila tissues. In contrast, RNAi depletion of capt, tsr and ssh in Drosophila S2 cells affects actin-based lamella formation differently in each case. Whether loss of these three related genes might cause the same effect in the same tissue remained unclear. METHODS: Loss-of-function mutant clones were generated using the MARCM or EGUF system, whereas overexpression clones were generated using the Flip-out system. Immunostaining was then performed in eye imaginal discs with clones. FRAP was performed in cultured eye discs. RESULTS: Here, we compared their loss-of-function phenotypes at single-cell resolution, using a sheet of epithelial cells in the Drosophila eye imaginal disc as a model system. Surprisingly, we found that capt and ssh, but not tsr, mutant cells within and posterior to the morphogenetic furrow (MF) shared similar phenotypes. The capt/ssh mutant cells possessed: (1) hexagonal cell packing with discontinuous adherens junctions; and (2) largely complementary accumulation of excessive phosphorylated myosin light chain (p-MLC) and F-actin rings at the apical cortex. We further showed that the capt/ssh mutant phenotypes depended on the inactivation of protein kinase A (PKA) and activation of Rho. CONCLUSIONS: Although Capt, Ssh and Tsr were all reported to negatively regulate actin polymerization, we found that Capt and Ssh, but not Tsr, share overlapping functions during eye morphogenesis.
Analyzing Tropical Waves Using the Parallel Ensemble Empirical Mode Decomposition Method: Preliminary Results from Hurricane Sandy
In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multi-dimensional data sets, we first implement multilevel parallelism in the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate studies that examine the statistical relationship between tropical waves and TC formation.
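The ensemble structure that makes EEMD parallelizable can be sketched in a few lines: each noise-perturbed copy of the signal is decomposed independently, and averaging over the ensemble cancels the added noise. The `crude_sift` helper below is a deliberate stand-in (a moving-average split into fast and slow parts), not real EMD sifting, which requires iterative envelope fitting; the thread pool simply shows where the paper's coarse-grained parallelism over ensemble members would go.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def crude_sift(signal, width=11):
    """Illustrative stand-in for EMD sifting: a moving average splits the
    signal into a fast 'IMF-like' part and a slow residue (NOT real EMD)."""
    kernel = np.ones(width) / width
    residue = np.convolve(signal, kernel, mode="same")
    return signal - residue, residue

def eemd_sketch(signal, n_ensemble=8, noise_std=0.1, workers=4):
    """Ensemble step of EEMD: decompose noise-perturbed copies in
    parallel, then average so the added white noise cancels out."""
    rng = np.random.default_rng(0)
    copies = [signal + noise_std * rng.standard_normal(signal.size)
              for _ in range(n_ensemble)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(crude_sift, copies))   # one task per member
    fast = np.mean([p[0] for p in parts], axis=0)
    slow = np.mean([p[1] for p in parts], axis=0)
    return fast, slow

t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 3 * t)  # fast + slow wave
fast, slow = eemd_sketch(sig)
```

Since the ensemble members are independent, this outer loop scales to many processes or nodes, which is the multilevel parallelism the abstract reports.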
Traditional Chinese Medicine ZHENG Identification Provides a Novel Stratification Approach in Patients with Allergic Rhinitis
Background. We aimed to apply ZHENG identification to provide an easy and useful tool to stratify patients with allergic rhinitis (AR) by exploring the correlation between the quantified scores of AR symptoms and the TCM ZHENGs. Methods. A total of 114 AR patients were enrolled in this observational study. All participants received anterior rhinoscopy and acoustic rhinometry examinations. Their blood samples were collected for measurement of total serum immunoglobulin E (IgE), blood eosinophil count (Eos), and serum eosinophil cationic protein (ECP). They also completed two questionnaires to assess the severity scores of AR symptoms and the quantified TCM ZHENG scores. Multiple linear regression analysis was used to determine explanatory factors for the score of AR manifestations. Results. IgE and ECP levels, duration of AR, and the two derived TCM ZHENG scores of "Yin-Xu − Yang-Xu" and "Qi-Xu + Blood-Xu" were the five explanatory variables predicting the severity scores of AR symptoms. Patients who had higher scores of "Yin-Xu − Yang-Xu" or "Qi-Xu + Blood-Xu" tended to manifest as "sneezers and runners" or "blockers," respectively. Conclusions. The TCM ZHENG scores correlated with the severity scores of AR symptoms and provide an easy and useful tool to stratify AR patients.
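The multiple linear regression step in the abstract can be sketched as an ordinary least-squares fit with five explanatory variables. Everything below is synthetic and hypothetical: the coefficients and data are invented for illustration and bear no relation to the study's actual measurements.

```python
import numpy as np

# Hypothetical illustration of the regression step: predict an AR symptom
# severity score from five explanatory variables (synthetic values, not
# study data; the true_beta coefficients are invented).
rng = np.random.default_rng(1)
n = 114                                            # the study enrolled 114 patients
X = rng.standard_normal((n, 5))                    # stand-ins for IgE, ECP, duration, 2 ZHENG scores
true_beta = np.array([0.8, 0.5, 0.3, 1.2, 0.9])
y = X @ true_beta + 0.1 * rng.standard_normal(n)   # synthetic severity score

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # coef[0] = intercept, coef[1:] = slopes
```

With low noise the recovered slopes track the generating coefficients, which is what lets the study read each fitted coefficient as the contribution of one explanatory factor to symptom severity.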
A Fuzzy-based Dynamic Channel Allocation
In traditional wireless networks, fixed spectrum allocation is one of the main causes of low spectrum utilization. To solve this problem, a new wireless communication model called Cognitive Radio Networks (CRN) has been proposed. CRN adopts Dynamic Spectrum Access (DSA) technology, so it can flexibly use spectrum that the primary user temporarily leaves unused. In cognitive radio networks, because each secondary user (SU) has a different location and surrounding spectrum environment, it may have a different set of available channels. How to assign these available channels is crucial to system performance. However, existing methods do not consider the problem of multipath fading; therefore, this study proposes an improved channel allocation scheme. We use the received signal strength, processed with fuzzy theory, to define the channel access priority of secondary users. Finally, simulation results show the superiority of our approach and verify the effectiveness of the proposed scheme. (Sponsored by the University of Colombo School of Computing, Sri Lanka; conference held 23-26 August 2015 in Colombo, Sri Lanka.)
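A fuzzy priority derived from received signal strength, as the abstract describes, might look like the sketch below. The membership function breakpoints and rule weights are assumptions chosen for illustration; the paper's actual fuzzy sets and inference rules are not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def access_priority(rss_dbm):
    """Fuzzy channel-access priority for a secondary user, from received
    signal strength in dBm. Breakpoints and rule weights are illustrative."""
    weak = tri(rss_dbm, -100, -90, -75)
    medium = tri(rss_dbm, -90, -75, -60)
    strong = tri(rss_dbm, -75, -60, -40)
    # Defuzzify as a weighted average of rule outputs
    # (weak -> 0.2, medium -> 0.5, strong -> 0.9).
    total = weak + medium + strong
    return (0.2 * weak + 0.5 * medium + 0.9 * strong) / total if total else 0.0

# Rank secondary users by fuzzy priority: stronger received signal wins.
users = {"SU1": -85.0, "SU2": -62.0, "SU3": -70.0}
order = sorted(users, key=lambda u: access_priority(users[u]), reverse=True)
```

Here `order` ranks SU2 first (strongest signal) and SU1 last, so channels would be offered to users in that order.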
The Study on Antecedents of Consumer Buying Impulsiveness in an Online Context
The global recession caused by the financial tsunami has seriously impacted numerous industries. Although the scale of the global e-commerce market has declined, global online shopping continues to grow. Much previous research has focused on the effect of website design characteristics on online impulsive buying behavior, and few studies have explored such behavior from the perspective of consumers' internal factors. This paper aims to explore and integrate the individual internal factors influencing consumer online buying impulsiveness, and further to identify the relationships among these factors. The results are as follows: (1) hedonic consumption needs, impulsive buying tendency, positive affect, and normative evaluations each positively influence buying impulsiveness; (2) hedonic consumption needs positively influence positive affect; (3) impulsive buying tendency positively influences normative evaluations; (4) normative evaluations positively influence positive affect.
An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling
Masked visual modeling (MVM) has recently proven effective for visual
pre-training. While similar reconstructive objectives on video inputs (e.g.,
masked frame modeling) have been explored in video-language (VidL)
pre-training, previous studies fail to find a truly effective MVM strategy that
can largely benefit the downstream performance. In this work, we systematically
examine the potential of MVM in the context of VidL learning. Specifically, we
base our study on a fully end-to-end VIdeO-LanguagE Transformer (VIOLET), where
the supervision from MVM training can be backpropagated to the video pixel
space. In total, eight different reconstructive targets of MVM are explored,
from low-level pixel values and oriented gradients to high-level depth maps,
optical flow, discrete visual tokens, and latent visual features. We conduct
comprehensive experiments and provide insights into the factors leading to
effective MVM training, resulting in an enhanced model, VIOLETv2. Empirically,
we show that VIOLETv2 pre-trained with the MVM objective achieves notable
improvements on 13 VidL benchmarks, ranging from video question answering and video captioning
to text-to-video retrieval.
Comment: CVPR'23; the first two authors contributed equally; code is available
at https://github.com/tsujuifu/pytorch_empirical-mv
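The core MVM objective, mask a subset of visual patches and score the model's reconstruction only at the masked positions, can be sketched in NumPy. This is a simplified sketch, not the VIOLET codebase: the zero mask token, the raw-pixel regression target (one of the eight targets the abstract lists), and the trivial `predict` model are all assumptions for illustration.

```python
import numpy as np

def mvm_loss(patches, predict, mask_ratio=0.4, seed=0):
    """Masked visual modeling objective: hide a random subset of video
    patches, run the model on the corrupted sequence, and compute a
    reconstruction loss only on the masked positions."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    masked = rng.random(n) < mask_ratio      # which patches to hide
    corrupted = patches.copy()
    corrupted[masked] = 0.0                  # crude stand-in for a mask token
    recon = predict(corrupted)               # model's reconstruction
    # Regress the target (here raw pixel values) on masked patches only.
    return np.mean((recon[masked] - patches[masked]) ** 2)

patches = np.random.default_rng(1).standard_normal((16, 64))  # 16 patches, 64-dim
identity_model = lambda x: x                 # placeholder "model"
loss = mvm_loss(patches, identity_model)
```

Swapping the regression target from raw pixels to depth maps, optical flow, or discrete visual tokens changes only the tensor compared against `recon[masked]`, which is the design space of eight targets the abstract explores.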