15 research outputs found

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students of the Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster, and to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997 and affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques (terrestrial and aerial LiDAR, close-range and aerial photogrammetry, topographic and GNSS instruments) or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    SCALABLE AND DISTRIBUTED METHODS FOR LARGE-SCALE VISUAL COMPUTING

    The objective of this research work is to develop efficient, scalable, and distributed methods to meet the challenges associated with processing the immense growth in visual data such as images and videos. The motivation stems from the fact that existing computer vision approaches are computation-intensive and cannot scale up to carry out analysis on large collections of data, nor perform real-time inference on resource-constrained devices. Some of the issues encountered are: 1) increased computation time for building high-level representations from low-level features, 2) increased training time for classification methods, and 3) carrying out analysis in real time on live video streams in a city-scale surveillance network. The issue of scalability can be addressed by model approximation and distributed implementation of computer vision algorithms, but existing scalable approaches suffer from high loss in model approximation and from communication overhead. In this thesis, our aim is to address some of these issues by proposing efficient methods for reducing the training time over large datasets in a distributed environment, and for real-time inference on resource-constrained devices by scaling up computation-intensive methods using model approximation. A scalable method, Fast-BoW, is presented for reducing the computation time of bag-of-visual-words (BoW) feature generation for both hard and soft vector quantization, with time complexities O(|h| log₂ k) and O(|h| k) respectively, where |h| is the size of the hash table used in the proposed approach and k is the vocabulary size. We replace the process of finding the closest cluster center with a softmax classifier, which improves the cluster boundaries over k-means and can be used for both hard and soft BoW encoding. To make the model compact and faster, the real weights are quantized into integer weights which can be represented using only a few bits (2-8). Hashing is then applied on the quantized weights to reduce the number of multiplications, which accelerates the entire process. The effectiveness of the video representation is further improved by exploiting the structural information among the various entities, or the same entity over time, which is generally ignored by the BoW representation. The interactions of the entities in a video are formulated as a graph of geometric relations among space-time interest points. The activities represented as graphs are recognized using an SVM with low-complexity graph kernels, namely the random walk kernel (O(n³)) and the Weisfeiler-Lehman kernel (O(n)). The use of graph kernels provides robustness to slight topological deformations, which may occur due to the presence of noise and viewpoint variation in the data. Further issues, such as the computation and storage of the large kernel matrix, are addressed using the Nyström method for kernel linearization.
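    A rough Python sketch of the softmax-based BoW encoding idea described above (hypothetical shapes, names and quantisation choices; not the authors' implementation) might look as follows:

```python
# Hypothetical sketch of softmax-based BoW encoding in the spirit of
# Fast-BoW; shapes, names and quantisation choices are assumptions.
import numpy as np

def soft_bow_encode(descriptors, W, b, hard=False):
    """Encode local descriptors (n x d) into a k-bin histogram using a
    softmax classifier instead of nearest-cluster-center search."""
    logits = descriptors @ W + b                 # (n, k) visual-word scores
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)            # softmax over k words
    if hard:                                     # hard assignment: one-hot
        hist = np.bincount(p.argmax(axis=1), minlength=W.shape[1])
        return hist / hist.sum()
    return p.mean(axis=0)                        # soft assignment

def quantise_weights(W, bits=4):
    """Map real weights to small signed integers (2-8 bits) so repeated
    products can be precomputed and looked up via a hash table."""
    qmax = 2 ** (bits - 1) - 1                   # symmetric signed range
    scale = np.abs(W).max() / qmax
    return np.round(W / scale).astype(np.int8), scale
```

    With only a handful of distinct integer weight values, many multiplications collapse into table lookups, which is broadly how the hash-table size |h| enters the stated complexities.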
    The second major contribution is in reducing the time taken to learn a kernel support vector machine (SVM) from large datasets using distributed implementation, while sustaining classification performance. We propose Genetic-SVM, which makes use of a distributed genetic algorithm to reduce the time taken in solving the SVM objective function. Data partitioning approaches achieve better speed-up than distributed algorithm approaches, but invariably lead to a loss in classification accuracy, as global support vectors may not have been chosen as local support vectors in their respective partitions. Hence, we propose DiP-SVM, a distribution-preserving kernel SVM in which the first- and second-order statistics of the entire dataset are retained in each of the partitions. This helps in obtaining local decision boundaries which are in agreement with the global decision boundary, thereby reducing the chance of missing important global support vectors. However, the task of combining the local SVMs hinders training speed. To address this issue, we propose Projection-SVM, which uses subspace partitioning: a decision tree is constructed on a projection of the data along the direction of maximum variance to obtain smaller partitions of the dataset. On each of these partitions, a kernel SVM is trained independently, thereby reducing the overall training time; it also reduces the prediction time significantly. Another issue addressed is the recognition of traffic violations and incidents in real time in a city-scale surveillance scenario, where the major requirements are accurate detection and real-time inference. Central computing infrastructures are unable to perform in real time due to the large network delay from the video sensors to the central computing server. We propose an efficient framework using edge computing for deploying large-scale visual computing applications, which reduces the latency and the communication overhead in a camera network. This framework is implemented for two surveillance applications, namely detection of motorcyclists riding without a helmet and detection of accidents. An efficient cascade of convolutional neural networks (CNNs) is proposed for incrementally detecting motorcyclists and their helmets in both sparse and dense traffic. This cascade of CNNs shares a common representation in order to avoid extra computation and over-fitting. Vehicle accidents are modeled as unusual incidents: a deep representation is extracted using denoising stacked auto-encoders trained on spatio-temporal video volumes of normal traffic, and the possibility of an accident is determined from the reconstruction error and the likelihood of the deep representation, the latter modeled with an unsupervised one-class SVM. The intersection points of the vehicles' trajectories are also used to reduce the false alarm rate and increase the reliability of the overall system. Both approaches are evaluated on real traffic videos collected from the video surveillance network of Hyderabad city in India, and the experiments demonstrate the efficacy of the proposed approaches.
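    As a rough illustration of the subspace-partitioning idea behind Projection-SVM above, the following sketch (assumed names and parameters such as max_depth; not the thesis code) projects the data onto its direction of maximum variance and splits at the median before training an independent kernel SVM on each partition:

```python
# Hypothetical sketch of variance-based subspace partitioning in the
# spirit of Projection-SVM; not the authors' implementation.
import numpy as np
from sklearn.svm import SVC

def partition_and_train(X, y, depth=0, max_depth=2):
    """Recursively split data along the direction of maximum variance
    (top principal component) and train one kernel SVM per leaf."""
    if len(np.unique(y)) < 2:                   # pure partition: constant label
        return {"leaf": True, "svm": None, "label": y[0]}
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    w = Vt[0]                                   # direction of maximum variance
    proj = Xc @ w
    thresh = np.median(proj)
    if depth == max_depth or thresh == proj.max():  # stop: train a local SVM
        return {"leaf": True, "svm": SVC(kernel="rbf").fit(X, y)}
    left = proj <= thresh
    return {
        "leaf": False, "w": w, "mean": mean, "thresh": thresh,
        "children": (
            partition_and_train(X[left], y[left], depth + 1, max_depth),
            partition_and_train(X[~left], y[~left], depth + 1, max_depth),
        ),
    }

def predict(node, x):
    """Route a sample down the projection tree to its local SVM."""
    while not node["leaf"]:
        go_left = (x - node["mean"]) @ node["w"] <= node["thresh"]
        node = node["children"][0 if go_left else 1]
    if node["svm"] is None:
        return node["label"]
    return node["svm"].predict(x.reshape(1, -1))[0]
```

    Each leaf SVM sees a much smaller partition, which is where the reduction in training and prediction time would come from.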

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview by positioning the following chapters in terms of their contributions to the technology frameworks that are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, "Technologies and Methods", contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, "Processes and Applications", details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing together businesses and leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.

    Correcting inter-sectional accuracy differences in drowsiness detection systems using generative adversarial networks (GANs)

    Doctoral degree thesis, University of KwaZulu-Natal, Durban. Road accidents contribute to many injuries and deaths among the human population, and there is substantial evidence that drowsiness is one of the most prominent causes of road accidents all over the world, resulting in fatalities and severe injuries for drivers, passengers, and pedestrians. These alarming facts are raising interest in equipping vehicles with robust driver drowsiness detection systems to minimise accident rates. One of the primary concerns of the motor industry is the safety of passengers, and as a consequence it has invested significantly in research and development to equip vehicles with systems that can help minimise road accidents. A number of research endeavours have attempted to use artificial intelligence, and particularly Deep Neural Networks (DNNs), to build intelligent systems that can detect drowsiness automatically. However, datasets are crucial when training a DNN: when datasets are unrepresentative, trained models are prone to bias because they are unable to generalise. This is particularly problematic for models trained in specific cultural contexts, which may not represent a wide range of races and thus fail to generalise. This is a specific challenge for the driver drowsiness detection task, where most publicly available datasets are unrepresentative as they cover only certain ethnic groups. This thesis investigates the problem of unrepresentative datasets in the training phase of Convolutional Neural Network (CNN) models. Firstly, CNNs are compared with several machine learning techniques to establish their superior suitability for the driver drowsiness detection task. An investigation into the implementation of CNNs highlighted that publicly available datasets such as NTHU, DROZY and CEW do not represent a wide spectrum of ethnic groups and lead to biased systems. A population bias visualisation technique is proposed to help identify, on a picture grid, the regions or individuals where a model fails to generalise. Furthermore, the use of Generative Adversarial Networks (GANs) with lightweight convolutions called Depthwise Separable Convolutions (DSC) for image translation to multi-domain outputs is investigated in an attempt to generate synthetic datasets. The thesis shows that GANs can be used to generate more realistic images with varied facial attributes for predicting drowsiness across multiple ethnic groups. Lastly, a novel framework is developed to detect bias and correct it using synthetic images generated by GANs. Training models using this framework results in a substantial performance boost.
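    For readers unfamiliar with the lightweight convolutions mentioned above, the following is a minimal PyTorch sketch of a depthwise separable convolution block (a generic illustration of the technique, not the thesis code):

```python
# Minimal sketch of a depthwise separable convolution (DSC) block;
# a generic illustration of the technique, not the thesis code.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A standard 3x3 conv uses in_ch*out_ch*9 weights; the DSC uses only
# in_ch*9 + in_ch*out_ch, a large saving when out_ch is big.
x = torch.randn(1, 32, 64, 64)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```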

    An Investigation into Trust and Reputation Frameworks for Autonomous Underwater Vehicles

    As Autonomous Underwater Vehicles (AUVs) become more technically capable and economically feasible, they are being increasingly used in a great many defence, commercial and environmental applications. These applications are tending towards independent, autonomous, ad-hoc, collaborative behaviour of teams or fleets of AUV platforms. This convergence of research experience from the Underwater Acoustic Network (UAN) and Mobile Ad-hoc Network (MANET) fields, along with the increasing Level of Automation (LOA) of such platforms, creates unique challenges in securing the operation and communication of these networks. The question of security and reliability of operation in networked systems has usually been resolved by having a centralised coordinating agent manage shared secrets and monitor for misbehaviour. However, in the sparse, noisy and constrained communications environment of UANs, the communications overheads and single-point-of-failure risk of this model are challenged (particularly when faced with capable attackers). As such, more lightweight, distributed, experience-based systems of "Trust" have been proposed to dynamically model and evaluate the "trustworthiness" of nodes within a MANET, in order to prevent or isolate the impact of malicious, selfish, or faulty misbehaviour. Previously, these models have monitored actions purely within the communications domain. Moreover, the vast majority rely on only one type of observation (metric) to evaluate trust: successful packet forwarding. In these cases, motivated actors may use this limited scope of observation either to perform unfairly without repercussions in other domains/metrics, or to make another, fair, node appear to be operating unfairly. This thesis is primarily concerned with the application of terrestrial-MANET trust frameworks to the UAN space. Considering the massive theoretical and practical differences in the communications environment, these frameworks must be reassessed for suitability to the marine realm. We find that current single-metric Trust Management Frameworks (TMFs) do not perform well even in a best-case scaling of the marine network, due to sparse and noisy observation metrics, and while basic multi-metric communications-only frameworks perform better than their single-metric forms, this performance is still not at a reliable level. We propose, demonstrate (through simulation) and integrate the use of physical observational metrics for trust assessment, in tandem with metrics from the communications realm, improving the safety, security, reliability and integrity of autonomous UANs. Three main novelties are demonstrated in this work: trust evaluation using metrics from the physical domain (movement, distribution, etc.); demonstration of the failings of communications-based trust evaluation in sparse, noisy, delay-prone and non-linear UAN environments; and the deployment of trust assessment across multiple domains, e.g. the physical and communications domains. The latter contribution includes the generation and optimisation of cross-domain metric compositions, or "synthetic domains", as a performance improvement method.
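    As a toy illustration of composing metrics across domains, the following sketch combines a communications-domain metric with a physical-domain metric into a single trust value (the metric names and weights here are assumptions for illustration, not the framework evaluated in the thesis):

```python
# Toy sketch of multi-domain trust composition; the metrics and weights
# are illustrative assumptions, not the thesis's evaluated framework.

def trust_score(observations, weights):
    """Weighted composition of per-domain metrics, each normalised to
    [0, 1]. Higher means more trustworthy."""
    total = sum(weights.values())
    return sum(weights[m] * observations[m] for m in weights) / total

# Communications domain: fraction of packets correctly forwarded.
# Physical domain: conformance to the node's expected track, where
# 0 = far off course and 1 = exactly on track.
obs = {"packet_forwarding": 0.92, "track_conformance": 0.35}
w = {"packet_forwarding": 0.5, "track_conformance": 0.5}

score = trust_score(obs, w)
print(f"trust = {score:.2f}")  # low despite healthy comms: the
                               # physical domain drags the score down
```

    The point of the composition is exactly the scenario in the example: a node whose packet forwarding looks fair but whose physical behaviour is anomalous is no longer able to hide in a single-metric framework.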

    Internet of Things Applications - From Research and Innovation to Market Deployment

    The book aims to provide a broad overview of various topics of the Internet of Things (IoT), from research, innovation and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability and industrial applications. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC (Internet of Things European Research Cluster), from technology to international cooperation and the global "state of play". The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research Agenda and presents global views and state-of-the-art results on the challenges facing the research, development and deployment of IoT at the global level. The Internet of Things is creating a revolutionary new paradigm, with opportunities in every industry, from healthcare, pharmaceuticals, food and beverage, agriculture, computing, electronics, telecommunications, automotive, aeronautics, transportation, energy and retail, to apply the massive potential of the IoT to achieving real-world solutions. The beneficiaries will also include semiconductor companies, device and product companies, infrastructure software companies, application software companies, consulting companies, and telecommunication and cloud service providers. IoT will create new revenues annually for these stakeholders, and potentially create substantial market-share shake-ups due to increased technology competition. The IoT will fuel technology innovation by creating the means for machines to communicate many different types of information with one another, while contributing to the increased value of information created by the number of interconnections among things and the transformation of the processed information into knowledge shared in the Internet of Everything. The success of IoT depends strongly on enabling technology development, market acceptance and standardization, which provide interoperability, compatibility, reliability, and effective operation on a global scale. The connected devices are part of ecosystems connecting people, processes, data, and things which communicate in the cloud, using increased storage and computing power and pushing for standardization of communication and metadata. In this context, security, privacy, safety and trust have to be addressed by product manufacturers throughout the life cycle of their products, from design to the support processes. The IoT developments address the whole IoT spectrum, from devices at the edge to cloud and datacentres on the back end and everything in between, through ecosystems created by industry, research and application stakeholders that enable real-world use cases to accelerate the Internet of Things and establish open interoperability standards and common architectures for IoT solutions. Enabling technologies such as nanoelectronics, sensors/actuators, cyber-physical systems, intelligent device management, smart gateways, telematics, smart network infrastructure, cloud computing and software technologies will create new products, new services and new interfaces by creating smart environments and smart spaces, with applications ranging from smart cities, smart transport, buildings, energy and grid, to smart health and life.
Technical topics discussed in the book include:
    • Introduction
    • Internet of Things Strategic Research and Innovation Agenda
    • Internet of Things in the industrial context: time for deployment
    • Integration of heterogeneous smart objects, applications and services
    • Evolution from device to semantic and business interoperability
    • Software definition and virtualization of network resources
    • Innovation through interoperability and standardisation when everything is connected anytime at anyplace
    • Dynamic context-aware, scalable and trust-based IoT security and privacy framework
    • Federated Cloud service management and the Internet of Things
    • Internet of Things Applications

    Proceedings of the 12th International Conference on Digital Preservation

    The 12th International Conference on Digital Preservation (iPRES) was held on November 2-6, 2015, in Chapel Hill, North Carolina, USA. There were 327 delegates from 22 countries. The program included 12 long papers, 15 short papers, 33 posters, 3 demos, 6 workshops, 3 tutorials and 5 panels, as well as several interactive sessions and a Digital Preservation Showcase.
