
    Uncovering wireless blackspots using Twitter data

    Blackspots are areas of poor signal coverage or service delivery that lead to customer complaints and lost business revenue. Understanding their spatio-temporal patterns at high resolution is important for interventions. Conventional methods such as customer helplines, drive-by testing, and network analysis tools often lack the real-time capability and spatial accuracy required. In this paper, we investigate the potential of geo-tagged Twitter data for uncovering blackspots. We apply lexicon-based and machine-learning natural language processing techniques to over 1.4 million Tweets in London to uncover blackspots both before the 4G rollout (2012) and after it (2016). We find that long-term poor-signal complaints made up the majority of complaints (86%) before the 4G rollout, whereas short-term network failures accounted for most complaints (66%) afterwards.
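    As a rough illustration of the kind of lexicon-based filtering such a pipeline might start with, the sketch below flags signal-complaint tweets and counts them per coarse grid cell; the lexicon phrases, grid size, and tweet structure are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of a lexicon-based complaint filter for geo-tagged tweets.
# Lexicon terms, grid size, and the tweet dict layout are illustrative assumptions.
from collections import Counter

SIGNAL_LEXICON = {"no signal", "poor signal", "dropped call", "no service", "network down"}

def is_signal_complaint(text: str) -> bool:
    """Return True if any lexicon phrase appears in the tweet text."""
    text = text.lower()
    return any(phrase in text for phrase in SIGNAL_LEXICON)

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate to a coarse grid cell (~1 km at London's latitude)."""
    return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

def blackspot_counts(tweets):
    """Count complaint tweets per grid cell; persistently high cells are candidate blackspots."""
    counts = Counter()
    for tweet in tweets:  # each tweet: {"text": str, "lat": float, "lon": float}
        if is_signal_complaint(tweet["text"]):
            counts[grid_cell(tweet["lat"], tweet["lon"])] += 1
    return counts

# Toy example
tweets = [{"text": "No signal again on Oxford Street!", "lat": 51.5152, "lon": -0.1419}]
print(blackspot_counts(tweets))
```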

    Lightweight Salient Object Detection in Optical Remote-Sensing Images via Semantic Matching and Edge Alignment

    Recently, many methods for salient object detection in optical remote sensing images (ORSI-SOD) based on convolutional neural networks (CNNs) have been proposed. However, most methods ignore the large number of parameters and the computational cost brought by CNNs, and only a few pay attention to portability and mobility. To facilitate practical applications, in this paper we propose a novel lightweight network for ORSI-SOD based on semantic matching and edge alignment, termed SeaNet. Specifically, SeaNet includes a lightweight MobileNet-V2 for feature extraction, a dynamic semantic matching module (DSMM) for high-level features, an edge self-alignment module (ESAM) for low-level features, and a portable decoder for inference. First, the high-level features are compressed into semantic kernels. Then, the semantic kernels are used to activate salient object locations in two groups of high-level features through dynamic convolution operations in the DSMM. Meanwhile, in the ESAM, cross-scale edge information extracted from two groups of low-level features is self-aligned through an L2 loss and used for detail enhancement. Finally, starting from the highest-level features, the decoder infers salient objects based on the accurate locations and fine details contained in the outputs of the two modules. Extensive experiments on two public datasets demonstrate that our lightweight SeaNet not only outperforms most state-of-the-art lightweight methods but also yields accuracy comparable to state-of-the-art conventional methods, while having only 2.76M parameters and running at 1.7G FLOPs for 288x288 inputs. Our code and results are available at https://github.com/MathLee/SeaNet.
    Comment: 11 pages, 4 figures, Accepted by IEEE Transactions on Geoscience and Remote Sensing 202
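    The dynamic-convolution step in the DSMM can be pictured roughly as follows: a high-level feature is pooled into a semantic vector, turned into per-sample depthwise kernels, and convolved with another feature group to highlight salient locations. The PyTorch sketch below is a minimal illustration under assumed tensor shapes; the module name and channel widths are placeholders, not the released SeaNet code.

```python
# Minimal sketch of "semantic kernels + dynamic convolution"; shapes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSemanticMatch(nn.Module):
    def __init__(self, channels: int, ksize: int = 3):
        super().__init__()
        self.ksize = ksize
        # Compress a pooled high-level feature into per-sample depthwise kernels.
        self.kernel_gen = nn.Linear(channels, channels * ksize * ksize)

    def forward(self, high_feat: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = target_feat.shape
        semantic = F.adaptive_avg_pool2d(high_feat, 1).flatten(1)        # (B, C) semantic vector
        kernels = self.kernel_gen(semantic).view(b * c, 1, self.ksize, self.ksize)
        # Grouped conv applies each sample's own depthwise kernels.
        out = F.conv2d(target_feat.reshape(1, b * c, h, w),
                       kernels, padding=self.ksize // 2, groups=b * c)
        return out.view(b, c, h, w) + target_feat                        # activate salient locations

# Example with MobileNet-V2-like channel width (illustrative)
m = DynamicSemanticMatch(channels=96)
print(m(torch.randn(2, 96, 9, 9), torch.randn(2, 96, 18, 18)).shape)    # torch.Size([2, 96, 18, 18])
```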

    Deep Learning for Financial Time Series Prediction : A State-of-the-Art Review of Standalone and Hybrid Models

    Financial time series prediction, whether for classification or regression, has been a heavily researched topic over the last decade. While traditional machine learning algorithms have yielded mediocre results, deep learning has largely contributed to elevating prediction performance. However, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and practitioners to determine which models potentially perform better, what techniques and components are involved, and how such models can be designed and implemented. This review article provides an overview of techniques, components and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models such as convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM) and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models such as CNN-LSTM and CNN-LSTM-AM have generally been reported to outperform standalone models such as the CNN-only model. Some remaining challenges are discussed, including limited accessibility for finance domain experts, delayed prediction, neglect of domain knowledge, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
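    As a concrete picture of the hybrid pattern the review surveys, the sketch below stacks a Conv1d feature extractor, an LSTM, and a simple additive attention pooling before a prediction head; all layer sizes and the single-step regression head are illustrative assumptions, not any particular surveyed model.

```python
# Minimal PyTorch sketch of the CNN-LSTM-attention hybrid pattern; sizes are illustrative.
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)     # scores each time step
        self.head = nn.Linear(hidden, 1)     # e.g. next-step return (regression)

    def forward(self, x):                    # x: (batch, time, features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # local (spatial) feature extraction
        h, _ = self.lstm(h)                                # temporal dependencies
        w = torch.softmax(self.attn(h), dim=1)             # attention weights over time steps
        context = (w * h).sum(dim=1)                       # attention-weighted pooling
        return self.head(context)

model = CNNLSTMAttention(n_features=5)
print(model(torch.randn(8, 30, 5)).shape)    # torch.Size([8, 1])
```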

    Robust Learning Based Condition Diagnosis Method for Distribution Network Switchgear

    This paper introduces a robust, learning-based method for diagnosing the condition of distribution network switchgear, which is crucial for maintaining power quality for end users. Traditional diagnostic models often rely heavily on expert knowledge and lack robustness. To address this, our method incorporates an expanded feature vector that includes environmental data, temperature readings, switch position, motor operation, insulation conditions, and local discharge information. We tackle the issue of high dimensionality through feature mapping. The method introduces a decision radius to categorize unlabeled samples and updates the model parameters using a combination of supervised and unsupervised losses, along with a consistency regularization function. This approach ensures robust learning even with a limited number of labeled samples. Comparative analysis demonstrates that this method significantly outperforms existing models in both accuracy and robustness.
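    A rough sketch of such a combined objective is given below: a supervised cross-entropy term on labelled samples, a decision-radius rule that pseudo-labels unlabelled samples lying close to a class centroid in feature space, and a consistency term under small input perturbations. The network, radius, and weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a supervised + decision-radius + consistency objective; all values are illustrative.
import torch
import torch.nn.functional as F

class TinyDiagnosisNet(torch.nn.Module):
    """Toy stand-in for the diagnosis network: returns (logits, features)."""
    def __init__(self, d_in=12, n_classes=4):
        super().__init__()
        self.backbone = torch.nn.Sequential(torch.nn.Linear(d_in, 32), torch.nn.ReLU())
        self.head = torch.nn.Linear(32, n_classes)
    def forward(self, x):
        feat = self.backbone(x)
        return self.head(feat), feat

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, radius=5.0, lam=0.5):
    logits_lab, feat_lab = model(x_lab)
    sup = F.cross_entropy(logits_lab, y_lab)                 # supervised loss on labelled data

    # Class centroids in feature space define decision regions for pseudo-labelling.
    classes = y_lab.unique()
    centroids = torch.stack([feat_lab[y_lab == c].mean(0) for c in classes])

    logits_u, feat_u = model(x_unlab)
    dists = torch.cdist(feat_u, centroids)                   # (N_unlab, n_present_classes)
    nearest, idx = dists.min(dim=1)
    inside = nearest < radius                                # trust only samples inside the radius
    unsup = F.cross_entropy(logits_u[inside], classes[idx[inside]]) if inside.any() else logits_u.sum() * 0.0

    # Consistency regularization: predictions should be stable under small perturbations.
    logits_pert, _ = model(x_unlab + 0.01 * torch.randn_like(x_unlab))
    consist = F.mse_loss(logits_pert.softmax(-1), logits_u.softmax(-1).detach())

    return sup + lam * unsup + lam * consist

net = TinyDiagnosisNet()
loss = semi_supervised_loss(net, torch.randn(16, 12), torch.randint(0, 4, (16,)), torch.randn(32, 12))
print(loss.item())
```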

    (Section A: Planning Strategies and Design Concepts)

    Geological hazards have caused huge losses globally, and rapidly developing China is no exception. At present, China's hazard prevention and mitigation research and construction is mostly concentrated in cities, while the rural, mountainous regions that suffer the most serious damage and loss from geological hazards are neglected. In these areas, hazard prevention planning is either missing or uses city standards, lacking scientific analysis and theoretical support. The study of disaster prevention and mitigation in remote regions is therefore becoming more urgent. Existing studies on geological hazard prevention mainly focus on urban areas and ignore remote and rural areas where large numbers of people live. By drawing on experience from disaster prevention and reduction in urban areas and incorporating effective scientific methods, this study aims to establish a planning support system for disaster mitigation that reduces the impact of disasters on people and their property in rural areas. The most significant contributions of this research and practice are as follows. Firstly, the high-precision data of the villages, which is usually lacking and difficult to acquire, can be obtained easily and quickly by unmanned aerial vehicles (UAV) equipped with optical sensors and laser scanners. Secondly, by combining high-precision data and the disaster evaluation model, a geological disaster risk assessment technology has been developed for rural areas that addresses not only natural factors but also human activities. Thirdly, based on this risk assessment technology, disaster prevention planning constructed specifically for villages is more quantitative than before. Fourthly, with the application of a planning support system to disaster mitigation, a scientific and effective solution for disaster rescue can be produced automatically. Lastly, this study selects a suitable area for implementation and demonstration, which verifies the feasibility and effectiveness of the system and enriches the knowledge base through a demonstration case. Based on the above research, a scientific hazard prevention strategy is put forward, which provides a scientific basis for decision-making and a support method for disaster prevention planning in villages.

    GMS-3DQA: Projection-based Grid Mini-patch Sampling for 3D Model Quality Assessment

    Nowadays, most 3D model quality assessment (3DQA) methods are aimed at improving performance. However, little attention has been paid to the computational cost and inference time required for practical applications. Model-based 3DQA methods extract features directly from the 3D models, which are characterized by a high degree of complexity. As a result, many researchers are inclined towards utilizing projection-based 3DQA methods. Nevertheless, previous projection-based 3DQA methods directly extract features from multiple projections to ensure quality prediction accuracy, which calls for more resource consumption and inevitably leads to inefficiency. Thus, in this paper, we address this challenge by proposing a no-reference (NR) projection-based Grid Mini-patch Sampling 3D Model Quality Assessment (GMS-3DQA) method. The projection images are rendered from six perpendicular viewpoints of the 3D model to cover sufficient quality information. To reduce redundancy and inference resources, we propose a multi-projection grid mini-patch sampling strategy (MP-GMS), which samples grid mini-patches from the multiple projections and assembles the sampled grid mini-patches into one quality mini-patch map (QMM). The Swin-Transformer tiny backbone is then used to extract quality-aware features from the QMMs. The experimental results show that the proposed GMS-3DQA outperforms existing state-of-the-art NR-3DQA methods on point cloud quality assessment databases. The efficiency analysis reveals that the proposed GMS-3DQA requires far less computational resources and inference time than other 3DQA competitors. The code will be available at https://github.com/zzc-1998/GMS-3DQA
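    The mini-patch sampling step can be illustrated as follows: each projection is partitioned into a grid, one small patch is drawn per cell, and the patches from all six projections are tiled into a single quality mini-patch map. The NumPy sketch below uses assumed grid, patch, and layout sizes rather than the paper's exact configuration.

```python
# Sketch of grid mini-patch sampling into a quality mini-patch map; sizes are assumptions.
import numpy as np

def sample_grid_minipatches(img, grid=4, patch=16, rng=None):
    """Take one patch x patch crop from each cell of a grid x grid partition of the image."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    ch, cw = h // grid, w // grid
    patches = []
    for gy in range(grid):
        for gx in range(grid):
            y = gy * ch + rng.integers(0, ch - patch + 1)
            x = gx * cw + rng.integers(0, cw - patch + 1)
            patches.append(img[y:y + patch, x:x + patch])
    return patches

def build_qmm(projections, grid=4, patch=16, cols=12):
    """Tile the sampled patches from all projections into one mini-patch map."""
    patches = [p for img in projections for p in sample_grid_minipatches(img, grid, patch)]
    assert len(patches) % cols == 0
    rows = [np.concatenate(patches[r * cols:(r + 1) * cols], axis=1)
            for r in range(len(patches) // cols)]
    return np.concatenate(rows, axis=0)

projections = [np.random.rand(288, 288, 3) for _ in range(6)]   # six perpendicular views (mock)
print(build_qmm(projections).shape)
```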

    Q-Refine: A Perceptual Quality Refiner for AI-Generated Image

    Despite the rapid evolution of Text-to-Image (T2I) models in recent years, their unsatisfactory generation results remain a challenge. However, uniformly refining AI-Generated Images (AIGIs) of different qualities not only limits the optimization achievable for low-quality AIGIs but also brings negative optimization to high-quality AIGIs. To address this issue, a quality-aware refiner named Q-Refine is proposed. Based on the preferences of the Human Visual System (HVS), Q-Refine uses an Image Quality Assessment (IQA) metric to guide the refining process for the first time, and modifies images of different qualities through three adaptive pipelines. Experiments show that, for mainstream T2I models, Q-Refine can effectively optimize AIGIs of different qualities. It can serve as a general refiner that optimizes AIGIs in terms of both fidelity and aesthetic quality, thus expanding the application of T2I generation models.
    Comment: 6 pages, 5 figures
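    The quality-aware routing idea can be summarized as: score the image with an IQA metric, then send it through one of three pipelines of increasing gentleness so that high-quality images are not over-processed. The sketch below is only a schematic of that routing; the thresholds and placeholder refiners are assumptions, not Q-Refine's actual pipelines.

```python
# Schematic of quality-score routing; thresholds and refiners are illustrative placeholders.
def strong_refiner(img):   return img   # placeholder: heavy regeneration for low-quality images
def moderate_refiner(img): return img   # placeholder: moderate detail enhancement
def light_refiner(img):    return img   # placeholder: light touch-up to avoid negative optimization

def refine(image, iqa_score: float):
    """Route an AI-generated image to a pipeline according to its perceptual quality score."""
    if iqa_score < 0.3:
        return strong_refiner(image)
    if iqa_score < 0.7:
        return moderate_refiner(image)
    return light_refiner(image)

print(refine("mock_image", 0.55))
```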

    You Can Mask More For Extremely Low-Bitrate Image Compression

    Learned image compression (LIC) methods have made significant progress in recent years. However, these methods are primarily dedicated to optimizing rate-distortion (R-D) performance at medium and high bitrates (> 0.1 bits per pixel (bpp)), while research on extremely low bitrates is limited. Besides, existing methods fail to explicitly explore the image structure and texture components that are crucial for image compression, treating them equally alongside uninformative components in their networks. This can cause severe perceptual quality degradation, especially in low-bitrate scenarios. In this work, inspired by the success of pre-trained masked autoencoders (MAE) in many downstream tasks, we propose to rethink their mask sampling strategy from structure and texture perspectives for high redundancy reduction and discriminative feature representation, further unleashing the potential of LIC methods. To this end, we present a dual-adaptive masking approach (DA-Mask) that samples visible patches based on the structure and texture distributions of the original images. We combine DA-Mask and a pre-trained MAE in masked image modeling (MIM) as an initial compressor that abstracts informative semantic context and texture representations. Such a pipeline can cooperate well with LIC networks to achieve further secondary compression while preserving promising reconstruction quality. Consequently, we propose a simple yet effective masked compression model (MCM), the first framework that unifies MIM and LIC end-to-end for extremely low-bitrate image compression. Extensive experiments have demonstrated that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications at very low bitrates. Our code is available at https://github.com/lianqi1008/MCM.git.
    Comment: Under review
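    To make the masking idea concrete, the sketch below scores each patch with a simple structure proxy (gradient energy) and a texture proxy (local variance) and keeps only the highest-scoring patches visible, in the spirit of sampling visible patches from structure and texture distributions. The proxies, patch size, and keep ratio are illustrative assumptions, not the paper's DA-Mask definition.

```python
# Sketch of structure/texture-driven visible-patch selection; proxies and ratios are assumptions.
import numpy as np

def patch_scores(gray, patch=16):
    """Structure (gradient energy) plus texture (variance) score for each patch."""
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    h, w = gray.shape
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = gray[y:y + patch, x:x + patch]
            g = grad[y:y + patch, x:x + patch]
            scores.append(((y, x), g.mean() + block.var()))
    return scores

def dual_adaptive_mask(gray, patch=16, keep_ratio=0.25):
    """Return top-left corners of the patches kept visible (highest combined score)."""
    scores = patch_scores(gray, patch)
    scores.sort(key=lambda s: s[1], reverse=True)
    n_keep = max(1, int(len(scores) * keep_ratio))
    return [pos for pos, _ in scores[:n_keep]]

gray = np.random.rand(256, 256)
visible = dual_adaptive_mask(gray)
print(len(visible), visible[:3])
```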

    Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision

    The rapid evolution of Multi-modality Large Language Models (MLLMs) has catalyzed a shift in computer vision from specialized models to general-purpose foundation models. Nevertheless, the abilities of MLLMs in low-level visual perception and understanding remain inadequately assessed. To address this gap, we present Q-Bench, a holistic benchmark crafted to systematically evaluate the potential abilities of MLLMs in three realms: low-level visual perception, low-level visual description, and overall visual quality assessment. a) To evaluate the low-level perception ability, we construct the LLVisionQA dataset, consisting of 2,990 diversely sourced images, each equipped with a human-asked question focusing on its low-level attributes. We then measure the correctness of MLLMs in answering these questions. b) To examine the description ability of MLLMs on low-level information, we propose the LLDescribe dataset, consisting of long expert-labelled golden low-level text descriptions for 499 images, together with a GPT-involved comparison pipeline between the outputs of MLLMs and the golden descriptions. c) Besides these two tasks, we further measure the visual quality assessment ability of MLLMs to align with human opinion scores. Specifically, we design a softmax-based strategy that enables MLLMs to predict quantifiable quality scores, and evaluate them on various existing image quality assessment (IQA) datasets. Our evaluation across the three abilities confirms that MLLMs possess preliminary low-level visual skills. However, these skills are still unstable and relatively imprecise, indicating the need for specific enhancements to MLLMs in these abilities. We hope that our benchmark can encourage the research community to delve deeper to discover and enhance these untapped potentials of MLLMs. Project Page: https://vqassessment.github.io/Q-Bench
    Comment: 25 pages, 14 figures, 9 tables, preprint version
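    The softmax-based scoring strategy can be sketched as follows: the MLLM is prompted about image quality and the logits of two opposing answer tokens (e.g. "good" vs. "poor") are converted into a scalar score via softmax. The token ids and model interface below are illustrative assumptions.

```python
# Sketch of turning opposing answer-token logits into a quality score; token ids are placeholders.
import torch

def quality_score(logits: torch.Tensor, good_id: int, poor_id: int) -> float:
    """Map the model's next-token logits to a quality score in [0, 1]."""
    pair = torch.stack([logits[good_id], logits[poor_id]])
    return torch.softmax(pair, dim=0)[0].item()   # probability mass on the "good" token

# Example with mock next-token logits over a small vocabulary
logits = torch.randn(32000)
print(quality_score(logits, good_id=1781, poor_id=5095))
```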

    Error performance and mutual information for IoNT interface system

    Molecular communication and the internet of nanothings (IoNT) have recently emerged as research hotspots, showing great potential in biomedical applications inside the human body. However, how to transmit information from in-body IoNT devices to outside devices has seldom been studied. It is well known that the nervous system is responsible for perceiving the external environment and controlling feedback signals; it works exactly like an interface between the external and internal environments. Inspired by this, this paper proposes a novel concept in which the nervous system is used to communicate between IoNT devices and in vitro equipment. In the proposed system, nanomachines transmit signals by stimulating a nerve fiber with an electrode. The signals then propagate along nerve fibers and muscle fibers. Finally, they cause changes in surface electromyography (sEMG) signals, which can be decoded by a body-surface receiver. The paper presents the framework of this entire through-body communication system, and each part of the framework is mathematically modeled. The error probability and mutual information of the system are derived from a communication theory perspective and are evaluated and analyzed through numerical results. This study can pave the way for connecting in vivo IoNT to external networks.
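    For a simplified view of the two quantities analyzed, the sketch below computes the bit error probability of binary antipodal signalling in Gaussian noise and the mutual information of the resulting binary symmetric channel; the paper's actual derivation models the nerve and sEMG link in far more detail, so this is only a toy stand-in.

```python
# Toy error-probability and mutual-information calculation under a simplified channel model.
import math

def q_func(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bit_error_prob(snr_db: float) -> float:
    """BER of antipodal binary signalling at the given SNR (in dB)."""
    snr = 10 ** (snr_db / 10)
    return q_func(math.sqrt(2 * snr))

def bsc_mutual_information(p: float) -> float:
    """Mutual information (bits/use) of a binary symmetric channel with error probability p."""
    h = lambda q: 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)
    return 1.0 - h(p)

for snr_db in (0, 5, 10):
    p = bit_error_prob(snr_db)
    print(f"SNR {snr_db} dB: Pe = {p:.3e}, I = {bsc_mutual_information(p):.3f} bits")
```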