
    MicroRNAs in Invasion and Metastasis in Lung Cancer


    Real-Time Neural Video Recovery and Enhancement on Mobile Devices

    As mobile devices become increasingly popular for video streaming, it is crucial to optimize the streaming experience for them. Although deep learning-based video enhancement techniques are gaining attention, most cannot run in real time on mobile devices. Additionally, many of these techniques focus solely on super-resolution and cannot handle the partial or complete loss or corruption of video frames that is common on the Internet and wireless networks. To overcome these challenges, we present a novel approach in this paper. Our approach consists of (i) a novel video frame recovery scheme, (ii) a new super-resolution algorithm, and (iii) a receiver enhancement-aware video bitrate adaptation algorithm. We have implemented our approach on an iPhone 12, where it supports 30 frames per second (FPS). We have evaluated our approach over WiFi, 3G, 4G, and 5G networks. Our evaluation shows that our approach enables real-time enhancement and yields a significant increase in video QoE (Quality of Experience) of 24%-82% in our video streaming system.
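The receiver enhancement-aware bitrate adaptation described in (iii) can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the function name, the quality ladder, and the quality scores are all assumptions. The idea is that when the receiver can enhance a low-resolution stream, the sender should rank bitrates by their *post-enhancement* quality rather than raw quality.

```python
# Hypothetical sketch: pick the bitrate whose receiver-enhanced quality is best
# among the rungs that fit the available bandwidth. All numbers are made up.

def select_bitrate(available_bandwidth_kbps, ladder):
    """ladder: list of (bitrate_kbps, enhanced_quality_score) tuples, where the
    score already accounts for receiver-side super-resolution."""
    feasible = [(rate, q) for rate, q in ladder if rate <= available_bandwidth_kbps]
    if not feasible:
        return min(ladder)[0]  # nothing fits: fall back to the lowest bitrate
    return max(feasible, key=lambda rq: rq[1])[0]  # best enhanced quality

ladder = [(500, 0.70), (1500, 0.82), (3000, 0.88), (6000, 0.90)]
print(select_bitrate(2000, ladder))  # 1500: fits the bandwidth, best enhanced quality
```

Because enhancement narrows the quality gap between rungs, such a policy can settle on a lower bitrate than a conventional ABR controller would, freeing bandwidth headroom.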

    PACMNET, V2, CoNEXT1, March 2024 Editorial

    The Proceedings of the ACM on Networking (PACMNET) series presents the highest-quality research conducted in the areas of emerging computer networks and their applications. We encourage submissions that present new technologies, novel experimentation, creative use of networking technologies, and new insights made possible through analysis. The journal is strongly supported by the ACM Special Interest Group on Data Communications (SIGCOMM) and involves top-level researchers on its Editorial Board. This issue contains four papers submitted for the June '23 deadline that were revised by their authors based on the extensive feedback provided by the reviewers. We would like to thank the many individuals who contributed to this issue of PACMNET, in particular the authors, who submitted their best work to PACMNET, and the Associate Editors, who provided constructive feedback to the authors in their reviews and participated in the online discussions and the Editors' meeting. We would also like to thank the SIGCOMM Executive Committee Chair and the CoNEXT Steering Committee members, who supported and guided us as usual.

    Neural-Symbolic VideoQA: Learning Compositional Spatio-Temporal Reasoning for Real-world Video Question Answering

    Compositional spatio-temporal reasoning poses a significant challenge in the field of video question answering (VideoQA). Existing approaches struggle to establish effective symbolic reasoning structures, which are crucial for answering compositional spatio-temporal questions. To address this challenge, we propose a neural-symbolic framework called Neural-Symbolic VideoQA (NS-VideoQA), specifically designed for real-world VideoQA tasks. The uniqueness and superiority of NS-VideoQA are two-fold: 1) It proposes a Scene Parser Network (SPN) to transform static-dynamic video scenes into a Symbolic Representation (SR), structuring persons, objects, relations, and action chronologies. 2) A Symbolic Reasoning Machine (SRM) is designed for top-down question decomposition and bottom-up compositional reasoning. Specifically, a polymorphic program executor is constructed for internally consistent reasoning from the SR to the final answer. As a result, our NS-VideoQA not only improves compositional spatio-temporal reasoning in real-world VideoQA tasks, but also enables step-by-step error analysis by tracing the intermediate results. Experimental evaluations on the AGQA Decomp benchmark demonstrate the effectiveness of the proposed NS-VideoQA framework. Empirical studies further confirm that NS-VideoQA exhibits internal consistency in answering compositional questions and significantly improves the capability of spatio-temporal and logical inference for VideoQA tasks.

    hsa-miR-125a-5p Enhances Invasion Ability in Non-Small Cell Lung Carcinoma Cell Lines

    Background and objective: MicroRNAs (miRNAs) are short non-coding RNAs that post-transcriptionally regulate gene expression by binding partially complementary target sites in mRNAs. Although impaired miRNA regulation has been observed in many human cancers, the functions of miR-125a remain unclear. The aim of this study is to investigate the expression of hsa-miR-125a-5p in NSCLC cell lines and the relationship between hsa-miR-125a-5p and the invasion of lung cancer cells. Methods: The expression of hsa-miR-125a-5p was examined by real-time PCR at 24 h, 36 h, 48 h, 60 h, and 72 h after transfection with a sense hsa-miR-125a-5p 2'-O-methyl oligonucleotide. Meanwhile, we assessed changes in the invasive ability of A549 and NCI-H460 cells by transwell assay. Results: Real-time PCR showed that hsa-miR-125a-5p was poorly expressed in 6 lung cancer cell lines, especially in LH7, NCI-H460, SPC-A-1, and A549. Expression of hsa-miR-125a-5p peaked 36 h after transfection with the sense 2'-O-methyl oligonucleotide. Furthermore, the invasive abilities of A549 and NCI-H460 cells were enhanced by up-regulating hsa-miR-125a-5p. Conclusion: hsa-miR-125a-5p was poorly expressed in lung cancer cells, and up-regulating it enhanced lung cancer cell invasion.
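The real-time PCR measurements in this abstract are typically converted into relative expression values; a standard way to do that is the 2^-ΔΔCt method. The abstract does not name its quantification method, so this sketch is an assumption, and the Ct values below are invented purely for illustration.

```python
# Worked example of the standard 2^-ΔΔCt relative-quantification calculation
# (assumed method; not stated in the abstract). Ct values are illustrative.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of the target miRNA versus a control sample via 2^-ΔΔCt."""
    delta_ct_sample = ct_target - ct_reference            # normalize to reference gene
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** -delta_delta_ct

# Target amplifies 2 cycles earlier in the transfected sample than in control:
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # 4.0 -> four-fold up-regulation
```

A fold change above 1 indicates up-regulation relative to the control, which is how a 36 h expression peak like the one reported here would show up numerically.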

    LLM-RadJudge: Achieving Radiologist-Level Evaluation for X-Ray Report Generation

    Evaluating generated radiology reports is crucial for the development of radiology AI, but existing metrics fail to reflect the task's clinical requirements. This study proposes a novel evaluation framework that uses large language models (LLMs) to compare radiology reports for assessment. We compare the performance of various LLMs and demonstrate that, when using GPT-4, our proposed metric achieves evaluation consistency close to that of radiologists. Furthermore, to reduce cost and improve accessibility, making this method practical, we construct a dataset from LLM evaluation results and perform knowledge distillation to train a smaller model. The distilled model achieves evaluation capabilities comparable to GPT-4. Our framework and distilled model offer an accessible and efficient evaluation method for radiology report generation, facilitating the development of more clinically relevant models. The model will be open-sourced and made accessible.
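The LLM-as-judge comparison at the core of this framework can be sketched generically. The prompt wording, the scoring scale, and the `ask_llm` callable are all assumptions for illustration; `ask_llm` stands in for any chat-completion client (such as one backed by GPT-4), since the paper's actual prompts are not given in the abstract.

```python
# Hypothetical sketch of LLM-based report comparison. The prompt and 1-5 scale
# are illustrative assumptions; ask_llm is a placeholder for a real model client.

def judge_report(generated, reference, ask_llm):
    """Ask an LLM to rate clinical agreement between two radiology reports."""
    prompt = (
        "Compare the two radiology reports below. Reply with a single integer "
        "from 1 (clinically contradictory) to 5 (clinically equivalent).\n\n"
        f"Reference report:\n{reference}\n\n"
        f"Generated report:\n{generated}\n"
    )
    reply = ask_llm(prompt)
    return int(reply.strip())  # parse the model's integer rating

# Usage with a stub standing in for a real model:
stub = lambda prompt: "5"
print(judge_report("No acute findings.", "No acute cardiopulmonary findings.", stub))  # 5
```

Collecting many such (report pair, rating) examples is also exactly the kind of dataset one would distill a smaller judge model from, as the abstract describes.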

    Neural Video Recovery for Cloud Gaming

    Cloud gaming is a multi-billion dollar industry. A client in cloud gaming sends its movement to the game server on the Internet, which renders and transmits the resulting video back. To provide a good gaming experience, a latency below 80 ms is required. This means that video rendering, encoding, transmission, decoding, and display have to finish within that time frame, which is especially challenging to achieve due to server overload, network congestion, and losses. In this paper, we propose a new method for recovering lost or corrupted video frames in cloud gaming. Unlike traditional video frame recovery, our approach uses game states to significantly enhance recovery accuracy and utilizes partially decoded frames to recover lost portions. We develop a holistic system that consists of (i) efficiently extracting game states, (ii) modifying an H.264 video decoder to generate a mask indicating which portions of video frames need recovery, and (iii) designing a novel neural network to recover either complete or partial video frames. Our approach is extensively evaluated using iPhone 12 and laptop implementations, and we demonstrate the utility of game states in game video recovery and the effectiveness of our overall design.
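The decoder-mask idea in (ii) and (iii) can be illustrated with a toy: the decoder marks which pixels of a frame were lost, and a recovery step fills only those regions while leaving correctly decoded pixels untouched. In this sketch the "recovery model" is a trivial copy from the previous frame, standing in for the paper's neural network; the function and data shapes are invented for illustration.

```python
# Hypothetical illustration of mask-guided partial frame recovery. The real
# system uses a neural network (and game states); here we just copy from the
# previous frame wherever the decoder's mask flags a lost pixel.

def recover_frame(partial, mask, previous):
    """Fill masked (lost) pixels of `partial` from `previous`; keep the rest.
    All arguments are 2-D lists of equal shape; mask[i][j] is True where lost."""
    return [
        [previous[i][j] if mask[i][j] else partial[i][j]
         for j in range(len(partial[0]))]
        for i in range(len(partial))
    ]

partial = [[1, 0], [3, 4]]               # 0 stands in for a corrupted pixel
mask = [[False, True], [False, False]]   # decoder flags only that pixel as lost
previous = [[9, 9], [9, 9]]
print(recover_frame(partial, mask, previous))  # [[1, 9], [3, 4]]
```

Restricting recovery to masked regions is what lets the system reuse the partially decoded frame instead of regenerating it wholesale, which matters under the 80 ms latency budget.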