
    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application that requires extracting information from images. The analysis demands highly sophisticated numerical and analytical methods, particularly in medicine, security, and other fields where the results of the processing carry vital importance. This is evident throughout the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. Reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
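    As a hedged, minimal illustration of the volume's central quantity (not drawn from any particular article in the issue), the sketch below estimates the Shannon entropy of a grayscale image from its normalized 256-bin histogram:

```python
# Minimal sketch: Shannon entropy of a uint8 grayscale image,
# estimated from its normalized intensity histogram.
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Return the Shannon entropy (bits/pixel) of a uint8 image."""
    hist = np.bincount(img.flatten(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                        # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A uniform-noise image approaches the 8-bit maximum of 8 bits/pixel.
rng = np.random.default_rng(0)
print(image_entropy(rng.integers(0, 256, (64, 64), dtype=np.uint8)))
```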

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue collects peer-reviewed research on advanced technologies related to the applications and theory of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, and character-recognition data, along with the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it a worthwhile read.

    Enhanced Multimedia Exchanges over the Internet

    Although the Internet was not originally designed for exchanging multimedia streams, consumers heavily depend on it for audiovisual data delivery. The intermittent nature of multimedia traffic, the unguaranteed underlying communication infrastructure, and dynamic user behavior collectively result in the degradation of Quality-of-Service (QoS) and Quality-of-Experience (QoE) perceived by end-users. Consequently, the volume of signalling messages inevitably increases to compensate for the degradation of the desired service qualities. Improved multimedia services could leverage adaptive streaming as well as blockchain-based solutions to enhance media-rich experiences over the Internet, at the cost of increased signalling volume. Many recent studies in the literature provide signalling reduction and blockchain-based methods for authenticated media access over the Internet while utilizing resources quasi-efficiently.

    To further increase the efficiency of multimedia communications, this dissertation investigates novel solutions for reducing signalling overhead and content access latency: (1) the first two research topics utilize steganography to reduce signalling bandwidth utilization while increasing the capacity of the multimedia network; and (2) the third research topic utilizes multimedia content access request management schemes to guarantee throughput values for servicing users, end-devices, and the network.

    Signalling for multimedia streaming is generated at every layer of the communication protocol stack: at the highest layer, segment requests are generated, and at the lower layers, byte-tracking messages are exchanged. By leveraging steganography, essential signalling information is encoded within multimedia payloads to reduce the amount of resources consumed by non-payload data. The first steganographic solution hides signalling messages within multimedia payloads, thereby freeing intermediate node buffers from queuing non-payload packets. Consequently, source nodes can deliver control information to receiving nodes at no additional network overhead. A utility function is designed to minimize the volume of overhead exchanged while minimizing visual artifacts; the proposed scheme thus leverages the fidelity of the multimedia stream to remove the largest amount of control overhead at the lowest visual cost. The second steganographic solution enables protocol translation by embedding packet header information within payload data, so that lightweight headers can be used instead. The protocol translator leverages a proposed utility function to enable the maximum number of translations while maintaining QoS and QoE requirements in terms of packet throughput and playback bit-rate.

    As the number of multimedia users and sources increases, decentralized content access and management over a blockchain-based system becomes inevitable. Blockchain technologies suffer from large processing latencies, which reduce the throughput of a multimedia network. Reducing blockchain-based access latencies is therefore essential to maintaining a decentralized, scalable model with seamless functionality and efficient utilization of resources. Adapting blockchains to feeless applications will then port the utility of ledger-based networks to audiovisual applications in a faultless manner. The proposed transaction processing scheme enables ledger maintainers to sustain the throughputs necessary for delivering expected QoS and QoE values on decentralized audiovisual platforms. A block slicing algorithm is designed to ensure that the ledger maintenance strategy benefits the operations of the blockchain-based multimedia network. Using the proposed algorithm, the throughput and latency of operations within the multimedia network are maintained at the desired level.
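    As a hedged, minimal illustration of the payload-embedding idea (a plain LSB scheme assumed for exposition, not the dissertation's utility-driven design), the sketch below hides a short control message inside a frame's pixel data and recovers it at the receiver:

```python
# Minimal sketch: embed/extract a control message in the least-significant
# bits of a uint8 frame. A 2-byte big-endian length prefix delimits the message.
import numpy as np

def embed_message(frame: np.ndarray, message: bytes) -> np.ndarray:
    payload = len(message).to_bytes(2, "big") + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = frame.flatten()                      # copy; original frame untouched
    if bits.size > flat.size:
        raise ValueError("message too large for this frame")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_message(frame: np.ndarray) -> bytes:
    flat = frame.flatten()
    length = int.from_bytes(np.packbits(flat[:16] & 1).tobytes(), "big")
    return np.packbits(flat[16:16 + 8 * length] & 1).tobytes()

frame = np.zeros((64, 64), dtype=np.uint8)
stego = embed_message(frame, b"SEG-REQ:42")     # hypothetical signalling message
assert extract_message(stego) == b"SEG-REQ:42"
```

    A real deployment would, as the dissertation argues, weigh embedding capacity against visual distortion through a utility function rather than overwriting LSBs unconditionally.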

    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological developments and challenges across science and engineering. Classical signal processing techniques have largely relied on mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now draw on tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want exposure to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that fall into five areas depending on the application at hand, ordered as follows: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond local optima, which limit the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC.
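    As a hedged illustration of how a metaheuristic steers a search beyond local optima (a generic simulated-annealing sketch, not a method from the collected papers):

```python
# Minimal sketch: simulated annealing on a multimodal 1-D objective.
import math, random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=10_000):
    x = best = x0
    t = t0
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)   # local move
        delta = f(candidate) - f(x)
        # Accepting some worse moves with probability exp(-delta/t) is what
        # lets the search climb out of local minima, unlike pure descent.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
        t *= cooling                                # gradually cool
    return best

f = lambda x: x * x + 10 * math.sin(3 * x)          # many local minima
print(simulated_annealing(f, x0=5.0))
```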

    DCT-Based Image Feature Extraction and Its Application in Image Self-Recovery and Image Watermarking

    Feature extraction is a critical element in the design of image self-recovery and watermarking algorithms, and its quality can strongly influence the performance of these processes. The objective of the work presented in this thesis is to develop an effective methodology for feature extraction in the discrete cosine transform (DCT) domain and to apply it in the design of adaptive image self-recovery and image watermarking algorithms. The methodology is to use the most significant DCT coefficients, which can lie at any frequency range, to detect and classify gray level patterns. In this way, gray level variations with a wider range of spatial frequencies can be examined without increasing computational complexity, and the methodology can distinguish gray level patterns rather than only the orientations of simple edges, as in many existing DCT-based methods.

    The proposed image self-recovery algorithm uses the developed feature extraction methodology to detect and classify blocks that contain significant gray level variations. According to the profile of each block, the critical frequency components representing the specific gray level pattern of the block are chosen for encoding. The code lengths are made variable depending on the importance of these components in defining the block's features, which makes the encoding of critical frequency components more precise while keeping the total length of the reference code short. The proposed image self-recovery algorithm has resulted in remarkably shorter reference codes, only 1/5 to 3/5 of those produced by existing methods, and consequently a superior visual quality in the embedded images. As the shorter codes contain the critical image information, the proposed algorithm has also achieved above-average reconstruction quality for various tampering rates.

    The proposed image watermarking algorithm is computationally simple and designed for blind extraction of the watermark. The principle of the algorithm is to embed the watermark in the locations where image data alterations are least visible. To this end, the properties of the human visual system (HVS) are used to identify the gray level image features of such locations. The characteristics of the frequency components representing these features are identified by applying the DCT-based feature extraction methodology developed in this thesis. The strength with which the watermark is embedded is made adaptive to the local gray level characteristics. Simulation results have shown that the proposed watermarking algorithm yields significantly higher visual quality in the watermarked images than the reported methods, with a difference in PSNR of about 2.7 dB, while the embedded watermark is highly robust against JPEG compression, even at low quality factors, and against some other common image processing operations.

    The good performance of the proposed image self-recovery and watermarking algorithms is an indication of the effectiveness of the developed feature extraction methodology. This methodology can be applied in a wide range of applications, and it is suitable for any process where DCT data is available.
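    As a hedged sketch of the core selection idea (an assumed rule for exposition, not the thesis's exact classification scheme), the snippet below keeps the k largest-magnitude AC coefficients of an 8x8 DCT block, wherever they fall in the frequency plane, and rebuilds an approximation from only those:

```python
# Minimal sketch: pick the k most significant DCT coefficients of a block
# at any frequency, then reconstruct an approximation from them alone.
import numpy as np
from scipy.fft import dctn, idctn

def block_signature(block: np.ndarray, k: int = 6):
    """Return (flat indices, values) of the k largest-magnitude AC coefficients."""
    coeffs = dctn(block.astype(float), norm="ortho").flatten()
    coeffs[0] = 0.0                    # skip the DC term; patterns live in the AC band
    idx = np.argsort(np.abs(coeffs))[-k:]
    return idx, coeffs[idx]

def reconstruct(indices, values, shape=(8, 8)):
    """Rebuild an approximate block from the kept coefficients only."""
    coeffs = np.zeros(int(np.prod(shape)))
    coeffs[indices] = values
    return idctn(coeffs.reshape(shape), norm="ortho")
```

    Because the selected indices are free to fall anywhere in the frequency plane, such a signature can capture textured patterns that fixed low-frequency selections would miss.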

    Novel Processing and Transmission Techniques Leveraging Edge Computing for Smart Health Systems

    The abstract is in the attachment.

    A monitoring strategy for application to salmon-bearing watersheds


    Collaborative Camouflaged Object Detection: A Large-Scale Dataset and Benchmark

    In this paper, we provide a comprehensive study on a new task called collaborative camouflaged object detection (CoCOD), which aims to simultaneously detect camouflaged objects with the same properties from a group of relevant images. To this end, we meticulously construct the first large-scale dataset, termed CoCOD8K, which consists of 8,528 high-quality and elaborately selected images with object mask annotations, covering 5 superclasses and 70 subclasses. The dataset spans a wide range of natural and artificial camouflage scenes with diverse object appearances and backgrounds, making it a very challenging dataset for CoCOD. In addition, we propose the first baseline model for CoCOD, named bilateral-branch network (BBNet), which explores and aggregates co-camouflaged cues within a single image and across the images of a group, respectively, for accurate camouflaged object detection. This is implemented by an inter-image collaborative feature exploration (CFE) module, an intra-image object feature search (OFS) module, and a local-global refinement (LGR) module. We benchmark 18 state-of-the-art models, including 12 COD algorithms and 6 CoSOD algorithms, on the proposed CoCOD8K dataset under 5 widely used evaluation metrics. Extensive experiments demonstrate the effectiveness of the proposed method and its significantly superior performance over the competitors. We hope that our proposed dataset and model will boost growth in the COD community. The dataset, model, and results will be available at: https://github.com/zc199823/BBNet--CoCOD. (Accepted by IEEE Transactions on Neural Networks and Learning Systems, TNNLS.)