
    Monte Carlo Linear Clustering with Single-Point Supervision is Enough for Infrared Small Target Detection

    Single-frame infrared small target (SIRST) detection aims at separating small targets from cluttered backgrounds in infrared images. Recently, deep learning based methods have achieved promising performance on SIRST detection, but at the cost of a large amount of training data with expensive pixel-level annotations. To reduce the annotation burden, we propose the first method to achieve SIRST detection with single-point supervision. The core idea of this work is to recover the per-pixel mask of each target from the given single point label by using clustering approaches, which looks simple but is indeed challenging, since targets are often non-salient and accompanied by background clutter. To handle this issue, we introduce randomness into the clustering process by adding noise to the input images, and then obtain much more reliable pseudo masks by averaging the clustered results. Thanks to this "Monte Carlo" clustering approach, our method can accurately recover pseudo masks and thus turn arbitrary fully supervised SIRST detection networks into weakly supervised ones with only single-point annotation. Experiments on four datasets demonstrate that our method can be applied to existing SIRST detection networks to achieve performance comparable to their fully supervised counterparts, which reveals that single-point supervision is strong enough for SIRST detection. Our code will be available at: https://github.com/YeRen123455/SIRST-Single-Point-Supervision
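The "Monte Carlo" idea described above — perturb the image with noise, cluster around the point label, and average the binary results — can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: the function name, the simple two-class intensity split, and the window/threshold parameters are all assumptions for the sake of the example.

```python
import numpy as np

def monte_carlo_pseudo_mask(img, point, n_trials=50, noise_std=0.05,
                            win=9, thresh=0.5, seed=0):
    """Recover a per-pixel pseudo mask around a single point label.

    Hypothetical sketch: each trial perturbs a local window with Gaussian
    noise, then splits it into target/background by whether each pixel's
    intensity is closer to the (noisy) seed intensity or to the window
    mean; averaging the binary results over many trials suppresses the
    noise-induced variability, and a final threshold yields the mask.
    """
    rng = np.random.default_rng(seed)
    y, x = point
    h = win // 2
    # assumes the point label lies away from the image border
    patch = img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    acc = np.zeros_like(patch)
    for _ in range(n_trials):
        noisy = patch + rng.normal(0.0, noise_std, patch.shape)
        seed_val = noisy[h, h]   # intensity at the point label
        bg_val = noisy.mean()    # crude background estimate
        # two-class split: pixel joins the target if closer to the seed
        acc += (np.abs(noisy - seed_val) < np.abs(noisy - bg_val)).astype(float)
    return acc / n_trials > thresh
```

On a synthetic frame with a bright 3x3 target on a dim background, the averaged mask recovers exactly the target pixels even though each individual noisy trial is unreliable.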

    VaBUS: Edge-Cloud Real-Time Video Analytics via Background Understanding and Subtraction

    Edge-cloud collaborative video analytics is transforming the way data is handled, processed, and transmitted from the ever-growing number of surveillance cameras around the world. To avoid wasting limited bandwidth on transmitting unrelated content, existing video analytics solutions usually perform temporal or spatial filtering to aggressively compress irrelevant pixels. However, most of them work in a context-agnostic way, oblivious to the circumstances in which the video content occurs and the context-dependent characteristics under the hood. In this work, we propose VaBUS, a real-time video analytics system that leverages the rich contextual information of surveillance cameras to reduce bandwidth consumption through semantic compression. As a task-oriented communication system, VaBUS dynamically maintains the background image of the video on the edge with minimal system overhead and sends only high-confidence Regions of Interest (RoIs) to the cloud through adaptive weighting and encoding. With a lightweight experience-driven learning module, VaBUS achieves high offline inference accuracy even when network congestion occurs. Experimental results show that VaBUS reduces bandwidth consumption by 25.0%-76.9% while achieving 90.7% accuracy on both the object detection and human keypoint detection tasks.
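The edge-side pipeline described above — maintain a background model cheaply, then forward only the deviating region — can be sketched with a simple exponential running average. This is a minimal illustration of background understanding and subtraction in general, not the VaBUS system itself; the class name, the update rate `alpha`, and the deviation threshold are assumptions for the example.

```python
import numpy as np

class BackgroundSubtractor:
    """Hypothetical sketch of edge-side background maintenance.

    An exponential running average models the static background;
    pixels deviating strongly from it are grouped into a bounding-box
    RoI (the only part that would be sent to the cloud), while the
    remaining pixels are dropped to save bandwidth.
    """
    def __init__(self, alpha=0.05, thresh=0.2):
        self.alpha = alpha       # background update rate
        self.thresh = thresh     # foreground deviation threshold
        self.background = None

    def update(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()   # bootstrap from first frame
            return None
        fg = np.abs(frame - self.background) > self.thresh
        # slowly absorb gradual scene changes into the background model
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        if not fg.any():
            return None                      # nothing worth transmitting
        ys, xs = np.nonzero(fg)
        return (ys.min(), xs.min(), ys.max(), xs.max())  # RoI bounding box
```

Feeding a static frame yields no RoI; a frame with a new bright block yields just that block's bounding box, which is the semantic-compression effect the abstract describes.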

    A Survey of Large Language Models

    Language is essentially a complex, intricate system of human expression governed by grammatical rules. Developing capable AI algorithms for comprehending and mastering a language poses a significant challenge. As a major approach, language modeling has been widely studied for language understanding and generation over the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling can lead to performance improvement, they have further studied the scaling effect by increasing model size to even larger scales. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve a significant performance improvement but also show special abilities that are not present in small-scale language models. To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size. Recently, research on LLMs has been largely advanced by both academia and industry, and a remarkable milestone is the launch of ChatGPT, which has attracted widespread attention from society. The technical evolution of LLMs has been making an important impact on the entire AI community, and could revolutionize the way we develop and use AI algorithms. In this survey, we review the recent advances of LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs, namely pre-training, adaptation tuning, utilization, and capacity evaluation. We also summarize the available resources for developing LLMs and discuss remaining issues for future directions. (Comment: ongoing work; 51 pages)

    Combination of Walnut Peptide and Casein Peptide alleviates anxiety and improves memory in anxious mice

    Introduction: Anxiety disorders remain the most prevalent cluster of mental disorders following the COVID-19 pandemic, with substantial detrimental effects on individuals' overall well-being and functioning. Despite a search spanning over a decade for novel anxiolytic compounds, none have been approved, leaving the current anxiolytic medications effective only for a specific subset of patients. Consequently, researchers are investigating everyday nutrients as potential alternatives to conventional medicines. Our prior study analyzed the anti-anxiety and memory-enhancing properties of the combination of Walnut Peptide (WP) and Casein Peptide (CP) in zebrafish.
    Methods and Results: Building on this work, our current research further validates these effects in mouse models exhibiting elevated anxiety levels through oral administration by gavage. Our results demonstrated that at a 170 + 300 mg human-equivalent dose, the WP + CP combination significantly improved performance in behavioral assessments of anxiety and memory. Furthermore, our analysis revealed that the combination restores the neurotransmitter dysfunction observed while monitoring serotonin, gamma-aminobutyric acid (GABA), dopamine (DA), and acetylcholine (ACh) levels. The supplementation also elevated the expression of brain-derived neurotrophic factor (BDNF) mRNA, indicating protective effects against the neurological stresses of anxiety. Additionally, there were strong correlations among the behavioral indicators, BDNF, and numerous neurotransmitters.
    Conclusion: Our findings suggest that the WP + CP combination holds promise as a treatment for anxiety disorders. Supplementary applications are also feasible when produced as powdered dietary supplements or added to common foods such as powder, yogurt, or milk.

    Milk fat globule membrane promotes brain development in piglets by enhancing the connection of white matter fiber tracts

    Introduction: Brain development during infancy is crucial for later health and development. Although Milk Fat Globule Membrane (MFGM) has been demonstrated to enhance brain development, further investigation is needed to determine the optimal dose.
    Methods: In this study, 80 piglets aged 2 days were randomly assigned to four groups: a control group, MFGM-L (1.74 g MFGM per 100 g diet), MFGM-M (4.64 g MFGM per 100 g diet), and MFGM-H (6.09 g MFGM per 100 g diet). Daily body weight and milk intake of the piglets were recorded until postnatal day 31. Learning and memory abilities were evaluated using the spatial T-maze test on day 15. MRI analysis was conducted to assess functional and structural changes in brain tissues. Additionally, mRNA and protein expression of brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NTF-3) in the hippocampus and prefrontal cortex were evaluated.
    Results: The MFGM-supplemented diet significantly improved the accuracy of the piglets in the T-maze test, with the MFGM-L group exhibiting the best performance. MRI showed no volumetric differences in gray and white matter between the groups. However, the fractional anisotropy in the left and right hippocampus of piglets in the MFGM-L group was significantly higher than in the other three groups. Furthermore, there was a strong correlation between T-maze accuracy and hippocampal fractional anisotropy.
    Discussion: The MFGM-supplemented diet also increased the expression of BDNF in the cerebral cortex, although the changes in BDNF were not consistent with the T-maze results. In conclusion, adding 1.74 g MFGM per 100 g diet can significantly improve neonatal piglets' learning and memory abilities, potentially by enhancing the connectivity of white matter fiber bundles in the brain.