13 research outputs found

    Co-attention Graph Pooling for Efficient Pairwise Graph Interaction Learning

    Graph Neural Networks (GNNs) have proven to be effective in processing and learning from graph-structured data. However, previous works mainly focused on understanding single graph inputs, while many real-world applications require pairwise analysis of graph-structured data (e.g., scene graph matching, code searching, and drug-drug interaction prediction). To this end, recent works have shifted their focus to learning the interaction between pairs of graphs. Despite their improved performance, these works were still limited in that the interactions were considered at the node level, resulting in high computational costs and suboptimal performance. To address this issue, we propose a novel and efficient graph-level approach for extracting interaction representations using co-attention in graph pooling. Our method, Co-Attention Graph Pooling (CAGPool), exhibits competitive performance relative to existing methods in both classification and regression tasks on real-world datasets, while maintaining lower computational complexity. Comment: Published at IEEE Access.
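
    The abstract does not give implementation details, but the core idea can be illustrated. Below is a minimal sketch, assuming PyTorch, of graph-level co-attention pooling between two graphs; the bilinear affinity, the max-based scoring, and the CoAttentionPool layer name are illustrative assumptions, not the paper's exact CAGPool formulation.

```python
# Minimal sketch of co-attention pooling over a pair of graphs (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))  # bilinear affinity weights
        nn.init.xavier_uniform_(self.W)

    def forward(self, h1, h2):
        # h1: (n1, d) node features of graph 1; h2: (n2, d) of graph 2
        affinity = h1 @ self.W @ h2.T                       # (n1, n2) cross-graph affinity
        a1 = F.softmax(affinity.max(dim=1).values, dim=0)   # per-node score for graph 1
        a2 = F.softmax(affinity.max(dim=0).values, dim=0)   # per-node score for graph 2
        g1 = a1 @ h1                                        # (d,) attention-weighted readout
        g2 = a2 @ h2
        return g1, g2

pool = CoAttentionPool(dim=16)
g1, g2 = pool(torch.randn(5, 16), torch.randn(7, 16))
print(g1.shape, g2.shape)  # torch.Size([16]) torch.Size([16])
```

    The pooled pair (g1, g2) would then feed a downstream classifier or regressor, which is what makes the interaction graph-level rather than node-level.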

    DeepCompass: AI-driven Location-Orientation Synchronization for Navigating Platforms

    In current navigating platforms, the user's orientation is typically estimated from the difference between two consecutive locations. In other words, the orientation cannot be identified until the second location is obtained. This asynchronous location-orientation identification leads to a familiar real-life question: Why does my navigator tell the wrong direction of my car at the beginning? We propose DeepCompass to identify the user's orientation by bridging the gap between street-view and user-view images. First, we explore suitable model architectures and design corresponding input configurations. Second, we demonstrate artificial transformation techniques (e.g., style transfer and road segmentation) to minimize the disparity between the street view and the user's real-time experience. We extensively evaluate DeepCompass under various driving conditions. DeepCompass requires no additional hardware and, unlike magnetometer-based navigators, is not susceptible to external interference. This highlights the potential of DeepCompass as an add-on to existing sensor-based orientation detection methods. Comment: 7 pages with 3 supplemental pages.
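
    As a rough illustration of the matching idea, the sketch below scores a user-view image against street-view crops rendered at candidate headings and picks the best match. The embed() placeholder encoder and the discrete heading set are assumptions for illustration, not the DeepCompass architecture.

```python
# Illustrative heading estimation by street-view/user-view matching (assumed scheme).
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder visual encoder; in practice a CNN feature extractor."""
    v = image.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-8)

def estimate_heading(user_view, street_views_by_heading):
    user_vec = embed(user_view)
    scores = {
        heading: float(embed(view) @ user_vec)  # cosine similarity of unit vectors
        for heading, view in street_views_by_heading.items()
    }
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
views = {h: rng.random((32, 32)) for h in (0, 90, 180, 270)}
print(estimate_heading(views[90], views))  # 90: the matching crop wins
```

    Style transfer and road segmentation, as described in the abstract, would be applied before embedding to reduce the appearance gap between the two image sources.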

    Crowdsourced mapping of unexplored target space of kinase inhibitors

    Despite decades of intensive search for compounds that modulate the activity of particular protein targets, a large proportion of the human kinome remains undrugged. Effective approaches are therefore required to map the massive space of unexplored compound-kinase interactions for novel and potent activities. Here, we carry out a crowdsourced benchmarking of predictive algorithms for kinase inhibitor potencies across multiple kinase families, tested on unpublished bioactivity data. We find that the top-performing predictions are based on various models, including kernel learning, gradient boosting, and deep learning, and that their ensemble leads to a predictive accuracy exceeding that of single-dose kinase activity assays. We design experiments based on the model predictions and identify unexpected activities even for under-studied kinases, thereby accelerating experimental mapping efforts. The open-source prediction algorithms, together with the bioactivities between 95 compounds and 295 kinases, provide a resource for benchmarking prediction algorithms and for extending the druggable kinome. The IDG-DREAM Challenge carried out crowdsourced benchmarking of predictive algorithms for kinase inhibitor activities on unpublished data. This study provides a resource to compare emerging algorithms and prioritize new kinase activities to accelerate drug discovery and repurposing efforts.
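
    Since the abstract highlights that an ensemble of heterogeneous models beat single-dose assays, a minimal sketch of that ensembling idea may help. It assumes scikit-learn and uses synthetic stand-ins for compound-kinase descriptor features and pKd-style targets, not the challenge data or the actual top-performing models.

```python
# Sketch: average predictions from heterogeneous regressors (kernel + boosting).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 64))   # stand-in compound/kinase descriptor vectors
y = rng.random(200)         # stand-in pKd-style bioactivity values

models = [KernelRidge(kernel="rbf"), GradientBoostingRegressor()]
for m in models:
    m.fit(X, y)

X_new = rng.random((5, 64))
ensemble_pred = np.mean([m.predict(X_new) for m in models], axis=0)
print(ensemble_pred)
```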

    AI-driven Family Interaction Over Melded Space and Time

    Computer-mediated interaction services connect people over a distance. However, we point out that those people are often “locked in a frame”, which may be an interaction mode, a point in time, or a context of either person. We observe that such lock-ins make it difficult to shape the interaction to be mutually symmetric. In this article, we propose a semantic-equivalent melding of space and time to provide a new form of empathetic interaction. We present HomeMeld and MomentMeld, which apply AI to meld space and time, respectively. HomeMeld provides a sense of living together to a family living apart through AI-driven autonomous robotic avatars. MomentMeld utilizes an ensemble of visual AI models to create interaction topics by matching semantically equivalent photos. In-the-wild experiments reveal that HomeMeld and MomentMeld open new possibilities for empathetic interaction. Finally, we introduce a new interaction service leveraging the technical synergy of HomeMeld and MomentMeld.

    MomentMeld: AI-augmented Mobile Photographic Memento towards Mutually Stimulatory Inter-generational Interaction

    Aging often comes with declining social interaction, a known adversarial factor impacting the life satisfaction of the senior population. Such decline appears even in the family, a permanent social circle, as adult children eventually become independent. We present MomentMeld, an AI-powered, cloud-backed mobile application that blends into everyday routines and naturally encourages rich and frequent inter-generational interactions in a family, especially those between the senior generation and their adult children. First, we design a photographic interaction aid called the mutually stimulatory memento, a cross-generational juxtaposition of semantically related photos that brings natural arousal of context-specific inter-generational empathy and reminiscence. Second, we build comprehensive ensemble AI models consisting of various deep neural networks, along with a runtime system that automates the creation of mutually stimulatory mementos on top of the user's usual photo-taking routines. We deploy MomentMeld in the wild with six families for an eight-week period, and discuss the key findings and further implications.
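
    A minimal sketch of the photo-pairing step may clarify the idea: match each senior photo to the most semantically similar photo from the adult child by embedding similarity. The embedding vectors here are random stand-ins; the paper's actual system uses an ensemble of deep visual models.

```python
# Sketch: cross-generational photo pairing by cosine similarity of embeddings.
import numpy as np

def pair_photos(senior_embs: np.ndarray, child_embs: np.ndarray):
    """For each senior photo embedding, find the most similar child photo."""
    s = senior_embs / np.linalg.norm(senior_embs, axis=1, keepdims=True)
    c = child_embs / np.linalg.norm(child_embs, axis=1, keepdims=True)
    sim = s @ c.T              # (n_senior, n_child) cosine similarities
    return sim.argmax(axis=1)  # index of best-matching child photo per senior photo

rng = np.random.default_rng(1)
matches = pair_photos(rng.random((4, 128)), rng.random((10, 128)))
print(matches)  # one child-photo index per senior photo
```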

    Towards Understanding Relational Orientation: Attachment Theory and Facebook Activities

    Knowing individuals' relational orientation is imperative for effective offline, as well as online, interactions and collaborations. We use attachment theory to examine the link between Facebook users' relational orientation (in terms of attachment styles: anxiety and avoidance) and their relational activities. Our research examines whether and how the two key relational processes identified in offline social relationships (self-expression and responsiveness) are manifested on online social networks and related to attachment styles. We describe our dataset of 640 Facebook users, their attachment scale survey results, and their 525,334 posts. We define four features that map onto relational activities on Facebook: status updates and status updates with emotional words (self-expression); comments and likes (responsiveness). We find significant relationships between the users' attachment styles and their self-expression and responsiveness activities on Facebook. A key takeaway of our research is that without relying on self-reported surveys, a computational analysis of a Facebook user's self-expressing and responding activities alone can reveal the user's underlying relational orientation (i.e., attachment style).
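
    The four features are simple per-user counts, so a small sketch may help. The post schema and the tiny emotion lexicon below are assumptions for demonstration, not the study's actual coding scheme.

```python
# Sketch: the four relational-activity features named in the abstract.
EMOTION_WORDS = {"happy", "sad", "love", "angry", "excited"}

def user_features(posts):
    """posts: list of dicts like {"type": "status"|"comment"|"like", "text": str}."""
    statuses = [p for p in posts if p["type"] == "status"]
    emotional = [p for p in statuses
                 if EMOTION_WORDS & set(p.get("text", "").lower().split())]
    return {
        "status_updates": len(statuses),                         # self-expression
        "emotional_status_updates": len(emotional),              # self-expression
        "comments": sum(p["type"] == "comment" for p in posts),  # responsiveness
        "likes": sum(p["type"] == "like" for p in posts),        # responsiveness
    }

print(user_features([
    {"type": "status", "text": "So happy today"},
    {"type": "comment", "text": "Nice!"},
    {"type": "like", "text": ""},
]))
```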

    Classification of lung nodules in CT scans using three-dimensional deep convolutional neural networks with a checkpoint ensemble method

    Background: Accurately detecting and examining lung nodules early is key in diagnosing lung cancers and thus one of the best ways to prevent lung cancer deaths. Radiologists spend countless hours detecting small spherical-shaped nodules in computed tomography (CT) images. In addition, even after detecting nodule candidates, a considerable amount of effort and time is required to determine whether they are real nodules. The aim of this paper is to introduce a high-performance nodule classification method that uses three-dimensional deep convolutional neural networks (DCNNs) and an ensemble method to distinguish nodules from non-nodules.
    Methods: We use a three-dimensional deep convolutional neural network (3D DCNN) with shortcut connections and a 3D DCNN with dense connections for lung nodule classification. The shortcut and dense connections alleviate the vanishing gradient problem by allowing the gradient to pass quickly and directly; these connections help deeply structured networks obtain general as well as distinctive features of lung nodules. Moreover, we increased the dimension of the DCNNs from two to three to capture 3D features. Compared with the shallow 3D CNNs used in previous studies, deep 3D CNNs more effectively capture the features of spherical-shaped nodules. In addition, we use an alternative ensemble method, the checkpoint ensemble method, to boost performance.
    Results: The performance of our nodule classification method is compared with that of the state-of-the-art methods used in the LUng Nodule Analysis 2016 (LUNA16) Challenge. Our method achieves higher competition performance metric (CPM) scores than the state-of-the-art deep learning methods. In the experimental setup ESB-ALL, the 3D DCNN with shortcut connections and the 3D DCNN with dense connections, combined with the checkpoint ensemble method, achieved the highest CPM score of 0.910.
    Conclusion: The results demonstrate that our method of using a 3D DCNN with shortcut connections, a 3D DCNN with dense connections, and the checkpoint ensemble method is effective for capturing the 3D features of nodules and distinguishing nodules from non-nodules.
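
    A minimal sketch of the checkpoint ensemble idea, assuming PyTorch: rather than training several networks, predictions from checkpoints saved along a single training run are averaged. The toy classifier and the in-memory checkpoints are placeholders, not the paper's 3D DCNNs.

```python
# Sketch: average softmax outputs across checkpoints from one training run.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, 2))  # toy 3D-patch classifier
checkpoints = [model.state_dict() for _ in range(3)]  # stand-ins for saved epoch weights

def checkpoint_ensemble_predict(model, checkpoints, x):
    probs = []
    with torch.no_grad():
        for state in checkpoints:
            model.load_state_dict(state)
            model.eval()
            probs.append(torch.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0)  # averaged nodule/non-nodule probabilities

x = torch.randn(4, 16, 16, 16)  # batch of CT sub-volumes
print(checkpoint_ensemble_predict(model, checkpoints, x))
```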

    My Being to Your Place, Your Being to My Place: Co-present Robotic Avatars Create Illusion of Living Together

    People in work-separated families have come to rely heavily on cutting-edge face-to-face communication services. Despite their ease of use and ubiquitous availability, the experience such remote face-to-face communication delivers still falls far short of actually living together. We envision that enabling a remote person to be spatially superposed in one's living space would be a breakthrough that catalyzes pseudo living-together interactivity. We propose HomeMeld, a zero-hassle self-mobile robotic system serving as a co-present avatar to create a persistent illusion of living together for those who are involuntarily living apart. The key challenges are 1) continuous spatial mapping between two heterogeneous floor plans and 2) navigating the robotic avatar to reflect the other's presence in real time under the limited maneuverability of the robot. We devise a notion of functionally equivalent location and orientation to translate a person's presence in one floor plan into another, heterogeneous one. We also develop predictive path warping to seamlessly synchronize the presence of the other. We conducted extensive experiments and deployment studies with real participants.
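
    A small sketch may clarify the notion of functionally equivalent location: a position in one home is mapped to the spot serving the same function in the other home's floor plan. The room coordinates and the nearest-spot rule below are hypothetical, not HomeMeld's actual mapping.

```python
# Sketch: translate presence across heterogeneous floor plans by function.
HOME_A = {"kitchen": (1.0, 4.0), "sofa": (3.5, 1.0), "desk": (5.0, 4.5)}
HOME_B = {"kitchen": (6.0, 2.0), "sofa": (1.0, 1.5), "desk": (2.0, 5.0)}

def nearest_function(pos, plan):
    """Which functional spot is this position closest to?"""
    return min(plan, key=lambda k: (plan[k][0] - pos[0]) ** 2 + (plan[k][1] - pos[1]) ** 2)

def translate_presence(pos_in_a):
    """Map a position in home A to the equivalent avatar target in home B."""
    return HOME_B[nearest_function(pos_in_a, HOME_A)]

print(translate_presence((3.4, 1.2)))  # near A's sofa -> B's sofa at (1.0, 1.5)
```

    Predictive path warping would then smooth the avatar's trajectory between such targets to hide the robot's maneuvering latency.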

    Detection of masses in mammograms using a one-stage object detector based on a deep convolutional neural network.

    Several computer-aided diagnosis (CAD) systems have been developed for mammography. They are widely used in certain countries, such as the U.S., where mammography studies are conducted more frequently; however, they are not yet globally employed for clinical use due to their inconsistent performance, which can be attributed to their reliance on hand-crafted features. It is difficult to use hand-crafted features for mammogram images that vary due to factors such as patients' breast density and differences in imaging devices. To address these problems, several studies have leveraged deep convolutional neural networks, which do not require hand-crafted features. Among recent object detectors, RetinaNet is particularly promising, as it is a simple one-stage object detector that is fast and efficient while achieving state-of-the-art performance. RetinaNet has been proven to perform well on conventional object detection tasks but has not been tested on detecting masses in mammograms. Thus, we propose a mass detection model based on RetinaNet. To validate its performance in diverse use cases, we construct several experimental setups using the public dataset INbreast and the in-house dataset GURO. In addition to training and testing on the same dataset (i.e., training and testing on INbreast), we evaluate our mass detection model in setups using additional training data (i.e., training on INbreast + GURO and testing on INbreast). We also evaluate our model in setups using pre-trained weights (i.e., using weights pre-trained on GURO, then training and testing on INbreast). In all the experiments, our mass detection model achieves performance comparable to or better than that of more complex state-of-the-art models, including a two-stage object detector. The results also show that using weights pre-trained on a dataset achieves performance similar to using that dataset directly in the training phase. Therefore, we make our mass detection model's weights, pre-trained on both GURO and INbreast, publicly available. We expect that researchers who train RetinaNet on their in-house datasets for the mass detection task can use our pre-trained weights to leverage the features extracted from these datasets.
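
    A minimal sketch of fine-tuning a one-stage detector for this task, assuming torchvision's RetinaNet implementation; the toy image, box annotation, and training wiring are placeholders, and this is not the authors' released pre-trained model.

```python
# Sketch: one training step of a RetinaNet-based mass detector.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# num_classes=2: background + mass; no downloaded weights for this sketch
model = retinanet_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
model.train()

images = [torch.rand(3, 512, 512)]                        # one mammogram crop
targets = [{"boxes": torch.tensor([[100., 120., 220., 260.]]),
            "labels": torch.tensor([1])}]                 # one annotated mass
loss_dict = model(images, targets)                        # classification + box losses
loss = sum(loss_dict.values())
loss.backward()
print({k: float(v) for k, v in loss_dict.items()})
```

    In practice one would load pre-trained weights (such as those the authors release) before fine-tuning, which is the transfer setup the abstract evaluates.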