822 research outputs found

    Analysis and Extraction of Tempo-Spatial Events in an Efficient Archival CDN with Emphasis on Telegram

    Full text link
    This paper presents an efficient archival framework, the Tempo-Spatial Content Delivery Network (TS-CDN), for exploring and tracking large-scale data in cyberspace. Social media data streams are continually renewed along both temporal and spatial dimensions, where various types of websites and social networks (i.e., channels, groups, pages, etc.) are treated as spatial locations in cyberspace, and accurate analysis requires encompassing the bulk of this data. TS-CDN builds an efficient content delivery network by applying a hash function to the incoming big data; hashing removes redundant data and yields a unique, deduplicated archive at large scale. Given a user query, the framework supports transparent monitoring and exploration of data along the tempo-spatial dimensions based on TF-IDF scores, and conformance to the i18n standard resolves Unicode handling issues. For evaluation of the TS-CDN framework, a dataset was collected from Telegram news channels from March 23, 2020 (1399-01-01) to September 21, 2020 (1399-06-31) on topics including Coronavirus (COVID-19), vaccines, school reopening, floods, earthquakes, justice shares, petroleum, and quarantine. Applying hashing to this Telegram dataset over the stated interval yielded a significant reduction in media storage: 39.8% for videos (from 79.5 GB to 47.8 GB) and 10% for images (from 4 GB to 3.6 GB). The TS-CDN infrastructure is presented as a web-based, service-oriented system. Experiments conducted on large time series data spanning different spatial dimensions (i.e., the Khabare Fouri, Khabarhaye Fouri, Akhbare Rouze Iran, and Akhbare Rasmi Telegram news channels) demonstrate the efficiency and applicability of the implemented TS-CDN framework.
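    The core deduplication step the abstract describes, hashing each media file and storing only unique content, can be sketched in a few lines of Python. The function name archive_media and the index layout below are illustrative assumptions, not part of TS-CDN itself.

```python
import hashlib
import os

def archive_media(paths, archive_index=None):
    """Deduplicate media files by content hash (illustrative sketch).

    Files with identical SHA-256 digests are stored once; later
    duplicates only add a reference to the existing entry.
    """
    archive_index = archive_index if archive_index is not None else {}
    saved_bytes = 0
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in archive_index:
            # Duplicate content: record the reference, skip re-storing the bytes.
            archive_index[digest]["refs"].append(path)
            saved_bytes += os.path.getsize(path)
        else:
            archive_index[digest] = {"stored_as": path, "refs": [path]}
    return archive_index, saved_bytes
```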

    Automatic object classification for surveillance videos.

    Get PDF
    The recent popularity of surveillance video systems, especially those deployed in urban scenarios, demands the development of visual techniques for monitoring purposes. A primary step towards intelligent surveillance video systems is automatic object classification, which remains an open research problem and the keystone for the development of more specific applications. Typically, object representation is based on inherent visual features. However, psychological studies have demonstrated that human beings routinely categorise objects according to their behaviour. The gap between the features a computer can automatically extract, such as appearance-based features, and the concepts human beings perceive effortlessly but machines cannot attain, such as behaviour, is commonly known as the semantic gap. Consequently, this thesis proposes to narrow the semantic gap and bring machine and human understanding together for object classification. A Surveillance Media Management framework is proposed to automatically detect and classify objects by analysing both the physical properties inherent in their appearance (machine understanding) and the behaviour patterns that require a higher level of understanding (human understanding). Finally, a probabilistic multimodal fusion algorithm bridges the gap by performing automatic classification that considers both machine and human understanding. The performance of the proposed Surveillance Media Management framework has been thoroughly evaluated on outdoor surveillance datasets. The experiments conducted demonstrate that combining machine and human understanding substantially enhances object classification performance, and that the inclusion of human reasoning and understanding provides the essential information to bridge the semantic gap towards smart surveillance video systems.
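    As a rough illustration of the probabilistic multimodal fusion idea, the sketch below combines the per-class posteriors of an appearance-based classifier and a behaviour-based classifier with a weighted log-linear rule. The function name, weights, and smoothing constant are assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def fuse_modalities(p_appearance, p_behaviour, prior=None, w=(0.5, 0.5)):
    """Weighted log-linear fusion of two per-class posteriors (a sketch,
    not the thesis's exact fusion rule)."""
    p_a = np.asarray(p_appearance, dtype=float)
    p_b = np.asarray(p_behaviour, dtype=float)
    prior = np.ones_like(p_a) / p_a.size if prior is None else np.asarray(prior, dtype=float)
    # Combine in the log domain to avoid underflow, then renormalize.
    log_post = w[0] * np.log(p_a + 1e-12) + w[1] * np.log(p_b + 1e-12) + np.log(prior)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Example: appearance says "car", behaviour is less sure; fusion keeps "car" on top.
print(fuse_modalities([0.7, 0.2, 0.1], [0.5, 0.4, 0.1]))
```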

    Efficient traffic congestion estimation using multiple spatio-temporal properties

    Get PDF
    Traffic estimation is an important problem in analyzing congestion in large-scale urban traffic. Recently, many researchers have used GPS data to estimate traffic congestion; however, how to fuse multiple data sources reasonably while guaranteeing both accuracy and efficiency remains challenging. In this paper, we propose a novel method, Multiple Data Estimation (MDE), to estimate congestion status in urban environments efficiently from GPS trajectory data. MDE estimates the congestion status of an area using multiple properties, including density, velocity, inflow, and previous status; among these, traffic inflow and previous status (a combination of temporal and spatial factors) are not used together in existing methods. To ensure accuracy and efficiency, MDE applies dynamic weights to the data and parameters. To evaluate the method, we apply it to large-scale taxi GPS data from Beijing and Shanghai. Extensive experiments on these two real-world datasets demonstrate significant improvements of our method over several state-of-the-art methods.
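    A minimal sketch of the kind of weighted multi-property combination the abstract describes follows. The normalizations, weight values, and parameter names are placeholder assumptions rather than the published MDE formulation.

```python
def estimate_congestion(density, velocity, inflow, prev_status,
                        weights=(0.35, 0.30, 0.20, 0.15),
                        free_flow_speed=60.0, jam_density=1.0, max_inflow=1.0):
    """Toy congestion score in [0, 1] built from the four properties the
    paper lists (density, velocity, inflow, previous status).

    The normalizations and weights are illustrative placeholders, not the
    published MDE parameters; in MDE the weights are adjusted dynamically.
    """
    d = min(density / jam_density, 1.0)              # higher density  -> more congested
    v = 1.0 - min(velocity / free_flow_speed, 1.0)   # lower speed     -> more congested
    i = min(inflow / max_inflow, 1.0)                # more inflow     -> more congested
    s = prev_status                                  # previous score already in [0, 1]
    w_d, w_v, w_i, w_s = weights
    return w_d * d + w_v * v + w_i * i + w_s * s
```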

    Video object extraction in distributed surveillance systems

    Get PDF
    Recently, automated video surveillance and related video processing algorithms have received considerable attention from the research community. Challenges in video surveillance arise from noise, illumination changes, camera motion, splits and occlusions, complex human behavior, and the management of extracted surveillance information for delivery, archiving, and retrieval. Many video surveillance systems focus on video object extraction, while few address both the system architecture and video object extraction. We focus on both, integrate them into an end-to-end system, and study the challenges associated with building such a system. We propose a scalable, distributed, real-time video surveillance system with a novel architecture, indexing, and retrieval. The system consists of three modules: video workstations for processing, control workstations for monitoring, and a server for management and archiving. The proposed system models object features as temporal Gaussians and achieves a frame rate of 18 frames/second for SIF video with static cameras, reduced network and storage usage, and precise retrieval results. It is more scalable and delivers more balanced distributed performance than recent architectures. The first stage of video processing is noise estimation. We propose a method for localizing homogeneity and estimating the additive white Gaussian noise variance, which uses spatially scattered initial seeds and particle filtering techniques to guide their spatial movement towards homogeneous locations from which the estimation is performed. The noise estimation method reduces the number of measurements required by block-based methods while achieving higher accuracy. Next, we segment video objects using a background subtraction technique. For static cameras, we generate the background model online using a mixture-of-Gaussians background maintenance approach. For moving cameras, we use a global motion estimation method offline to bring neighboring frames into the coordinate system of the current frame and merge them to produce the background model. We track detected objects using a feature-based object tracking method with improved detection and correction of occlusion and split. We detect occlusion and split through the identification of sudden variations in the spatio-temporal features of objects. To detect splits, we analyze the temporal behavior of split objects to discriminate between errors in segmentation and real separation of objects. Both objective and subjective experimental results show the ability of the proposed algorithm to detect and correct both splits and occlusions of objects. For the last stage of video processing, we propose a novel method for the detection of vandalism events, based on a proposed definition of vandal behaviors recorded in surveillance video sequences. We monitor changes inside a restricted site containing vandalism-prone objects and declare vandalism when an object is detected leaving the site while there are temporally consistent and significant static changes representing damage, given that the site is normally unchanged after use. The proposed method is tested on sequences showing real and simulated vandal behaviors and achieves a detection rate of 96%; it detects different forms of vandalism such as graffiti and theft. The proposed end-to-end video surveillance system aims to realize the potential of video object extraction in automated surveillance and retrieval by focusing on both video object extraction and the management, delivery, and utilization of the extracted information.
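    The background-subtraction stage for static cameras can be illustrated with OpenCV's off-the-shelf mixture-of-Gaussians model. This is a generic sketch of the technique, not the system's own implementation, and the post-processing parameters are arbitrary.

```python
import cv2

def extract_moving_objects(video_path, min_area=500):
    """Sketch of mixture-of-Gaussians background subtraction for a static
    camera, using OpenCV's MOG2 model rather than the thesis's own code."""
    cap = cv2.VideoCapture(video_path)
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    detections = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog.apply(frame)                        # per-pixel foreground mask
        mask = cv2.medianBlur(mask, 5)                 # suppress speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]    # keep sufficiently large blobs
        detections.append(boxes)
    cap.release()
    return detections
```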

    Online Moving Object Visualization with Geo-Referenced Data

    Get PDF
    As a result of the rapid evolution of smart mobile devices and the wide application of satellite-based positioning devices, the moving object database (MOD) has become a hot research topic in recent years. Moving objects generate large amounts of geo-referenced data of different types, such as videos, audio, images, and sensor logs. In order to better analyze and utilize these data, it is useful and necessary to visualize them on a map, and with the rise of web mapping this has never been easier. While displaying the trajectory of a moving object is a mature technology, there is little research on visualizing both the location and the data of moving objects in a synchronized manner. This dissertation proposes a general moving object visualization model to address this problem; the model divides spatial data visualization systems into four categories. Another contribution of this dissertation is a framework that handles all of these visualization tasks with synchronization control in mind, built on the TerraFly web mapping system. To evaluate the universality and effectiveness of the proposed framework, the dissertation presents four visualization systems that deal with a variety of situations and different data types.
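    One way to realize the synchronization step, mapping the playback time of a geo-referenced recording to a position on the trajectory, is plain linear interpolation over a time-sorted track. The snippet below is a minimal sketch independent of TerraFly or any specific map API; the function name and data layout are assumptions.

```python
import bisect

def position_at(trajectory, t):
    """Interpolate a (lat, lon) position at playback time t from a
    time-sorted trajectory [(t0, lat0, lon0), ...].

    Assumes strictly increasing timestamps; clamps to the endpoints
    when t falls outside the recorded interval.
    """
    times = [p[0] for p in trajectory]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return trajectory[0][1:]
    if i >= len(trajectory):
        return trajectory[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = trajectory[i - 1], trajectory[i]
    a = (t - t0) / (t1 - t0)
    return (lat0 + a * (lat1 - lat0), lon0 + a * (lon1 - lon0))
```

    During playback, calling position_at(trajectory, current_media_time) on each timer tick keeps the map marker aligned with the video, audio, or sensor stream being shown.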

    Analysis for Scalable Coding of Quality-Adjustable Sensor Data

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2014. 신현식. Machine-generated data such as sensor data now comprise a major portion of available information. This thesis addresses two important problems: storing massive sensor data collections and efficient sensing. We first propose quality-adjustable sensor data archiving, which compresses an entire collection of sensor data efficiently without compromising key features. Considering the data aging aspect of sensor data, our archiving scheme can control data fidelity to exploit the less frequent data access of users; this flexibility in quality adjustability leads to more efficient use of storage space. In order to store data from various sensor types cost-effectively, we study the optimal storage configuration strategy using analytical models that capture the characteristics of our scheme. This strategy stores sensor data blocks with the optimal configurations that maximize the data fidelity of various sensor data under a given storage budget. Next, we consider efficient sensing and propose a quality-adjustable sensing scheme. We adopt compressive sensing (CS), which is well suited for resource-limited sensors because of its low computational complexity, and enhance the quality adjustability intrinsic to CS with quantization and, in particular, temporal downsampling. Our sensing architecture provides more rate-distortion operating points than previous schemes, which enables sensors to adapt data quality more efficiently with respect to overall performance. Moreover, the proposed temporal downsampling improves coding efficiency, a known drawback of CS, and, together with a sparse random measurement matrix, further reduces the computational complexity of sensing devices. As a result, our quality-adjustable sensing can deliver gains to a wide variety of resource-constrained sensing techniques.
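    A compact sketch of the quality-adjustable sensing pipeline, temporal downsampling followed by a sparse random projection and uniform quantization, is given below. All parameter names and default values are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def sense_block(x, m, downsample=2, q_step=0.05, density=0.1, rng=None):
    """Quality-adjustable sensing sketch: temporal downsampling, sparse
    random projection, and uniform quantization (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    x_ds = np.asarray(x, dtype=float)[::downsample]     # temporal downsampling
    n = x_ds.size
    # Sparse random measurement matrix: most entries zero, the rest +/-1 (scaled).
    mask = rng.random((m, n)) < density
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    phi = mask * signs / np.sqrt(m * density)
    y = phi @ x_ds                                      # compressive measurements
    y_q = np.round(y / q_step) * q_step                 # uniform quantization
    return y_q, phi
```

    Coarser quantization steps, heavier downsampling, or fewer measurements m all trade reconstruction quality for rate and sensing cost, which is the kind of rate-distortion operating-point flexibility the abstract refers to.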

    Improving the Availability of Space Research Spatial Data*

    Get PDF
    The rapid development of space technology and the increased interest in space exploration have resulted in the intensive observation of celestial bodies, mostly in the solar system, over the past decade with the prospect of an upward trend in the future. Large amounts of collected data on space bodies impose the need to develop the Spatial Data Infrastructure of Celestial Bodies at the general level to enable standardized organization and storage of these data, and their efficient use and exchange. To approach the development of such an infrastructure, it is necessary to investigate what data, as well as how and to what extent, are collected through space observation. It is also necessary to investigate how this data can be obtained. This paper provides an overview of planetary spatial data archives, data storage and retrieval methods, and their shortcomings in the context of easy search, download and interpretation of data, all with the aim of establishing Spatial Data Infrastructure of Celestial Bodies that would make space data more accessible to the public and non-planetary scientists

    Multimedia Annotation Interoperability Framework

    Get PDF
    Multimedia systems typically contain digital documents of mixed media types, which are indexed on the basis of strongly divergent metadata standards. This severely hampers the interoperation of such systems. Therefore, machine understanding of metadata coming from different applications is a basic requirement for the interoperation of distributed multimedia systems. In this document, we present how interoperability among metadata, vocabularies/ontologies, and services is enhanced using Semantic Web technologies. In addition, we provide guidelines for semantic interoperability, illustrated by use cases. Finally, we present an overview of the most commonly used metadata standards and tools, and outline the general research direction for semantic interoperability using Semantic Web technologies.

    System Abstractions for Scalable Application Development at the Edge

    Get PDF
    Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling, and joint optimization. Typical approaches focus on performance optimization for individual applications; this often requires domain knowledge of the applications and leads to application-specific solutions, so application development and deployment over diverse scenarios incur repetitive manual effort. There are two overarching challenges in providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level: execution settings range from a small cluster serving as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environment. Further, application performance requirements vary significantly, making it even more difficult to map different applications onto already heterogeneous hardware. Second, there are trends towards combining edge and cloud resources and incorporating multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum from edge to cloud. Our framework provides key connections between different dimensions of design considerations, corresponding to the application abstraction, data abstraction, and resource management abstraction, respectively. First, the framework masks hardware heterogeneity with abstract resource types through containerization, and abstracts application processing pipelines into generic flow graphs. It further supports a notion of degradable computing for edge application scenarios driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control through a userspace scheduler service that wraps around the operating system scheduler.
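    The video data unit abstraction based on a distance between objects can be illustrated with a simple label-set metric. The Jaccard-style distance and greedy grouping below are assumptions standing in for the thesis's actual similarity measure and data organization.

```python
def semantic_distance(objs_a, objs_b):
    """Jaccard distance between the object-label sets of two video segments --
    one plausible instantiation of 'distance between objects', not the
    thesis's actual metric."""
    a, b = set(objs_a), set(objs_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def group_segments(segments, threshold=0.5):
    """Greedily cluster (segment_id, labels) pairs whose semantic distance
    to a group's first member stays below the threshold."""
    groups = []
    for seg_id, labels in segments:
        for group in groups:
            if semantic_distance(labels, group[0][1]) <= threshold:
                group.append((seg_id, labels))
                break
        else:
            groups.append([(seg_id, labels)])
    return groups

# Example: segments dominated by the same object classes end up in one unit.
print(group_segments([("s1", ["car", "person"]), ("s2", ["car"]), ("s3", ["dog"])]))
```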