
    IoT-Based Vehicle Monitoring and Driver Assistance System Framework for Safety and Smart Fleet Management

    Curbing road accidents has always been one of the utmost priorities in every country. In Malaysia, the Traffic Investigation and Enforcement Department reported that the total number of road accidents increased from 373,071 to 533,875 over the last decade. One of the significant causes of road accidents is driver behaviour, which is difficult for enforcement teams and fleet operators to regulate, especially for heavy vehicles. We propose adopting the Internet of Things (IoT) and its emerging technologies to monitor driver behaviour and driving patterns, and to alert the driver, in order to reduce road accidents. In this work, we propose a lane-tracking algorithm that alerts the driver when the vehicle sways out of its lane and an iris-detection algorithm that alerts the driver when drowsiness is detected. We implemented electronic devices such as cameras, a global positioning system (GPS) module, a global system for mobile communications (GSM) module, and a microcontroller as an in-vehicle intelligent transportation system. Using the same in-vehicle camera, we implemented face recognition for driver identification and recorded working duration, for authentication and occupational health monitoring, respectively. With the GPS module, we monitored the vehicle's speed and alerted the driver when it exceeded the permissible limit. We integrated IoT into the system so that the fleet centre can monitor the driver's behavioural activities and receive alerts in real time through a user access portal. We validated the system successfully on Malaysian roads. The outcome of this pilot project benefits the safety of drivers, public road users, and passengers. The impact of this framework can lead to new government regulation towards a merit and demerit system, real-time fleet monitoring for intelligent transportation systems, and socio-economic benefits such as cheaper health insurance premiums. The collected big data can also be used to predict driver behaviour in the future.
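
    The drowsiness-alert step is commonly implemented with an eye-aspect-ratio (EAR) heuristic over facial landmarks. The abstract does not specify the iris-detection algorithm, so the following is a minimal sketch under that assumption; the landmark source (a dlib-style six-point eye contour) and both thresholds are illustrative, not values from the paper.

```python
# Hypothetical sketch of the drowsiness-alert logic, using the common
# eye-aspect-ratio (EAR) heuristic over six (x, y) eye landmarks
# (e.g., from a dlib-style face-landmark detector). Thresholds are
# illustrative assumptions, not values from the paper.
from math import dist

EAR_THRESHOLD = 0.25      # assumed: eyes treated as closed below this ratio
CLOSED_FRAMES_ALERT = 48  # assumed: ~2 s of closed eyes at 24 fps

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered as in dlib."""
    a = dist(eye[1], eye[5])  # first vertical distance
    b = dist(eye[2], eye[4])  # second vertical distance
    c = dist(eye[0], eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

closed_frames = 0

def update(left_eye, right_eye) -> bool:
    """Feed one video frame's landmarks; return True to raise an alert."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_ALERT
```

    Counting consecutive closed-eye frames, rather than alerting on a single frame, suppresses false alarms from normal blinking.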

    REMOVING THE MASK: VIDEO FINGERPRINTING ATTACKS OVER TOR

    The Onion Router (Tor) is used by adversaries and warfighters alike to encrypt session information and gain anonymity on the internet. Since its creation in 2002, Tor has gained popularity among terrorist organizations, human traffickers, and illegal drug distributors who wish to use its services to mask their identity while engaging in illegal activities. Fingerprinting attacks assist in thwarting these attempts. Website fingerprinting (WF) attacks have been proven successful at linking a user to the website they have viewed over an encrypted Tor connection. With consumer video streaming making up a large majority of internet traffic and sites like YouTube remaining among the most visited sites in the world, it is just as likely that adversaries are using videos to spread misinformation, illegal content, and terrorist propaganda. Video fingerprinting (VF) attacks aim to use encrypted network traffic to predict the content of encrypted video sessions in closed- and open-world scenarios. This research builds upon an existing dataset of encrypted video session data and uses statistical analysis to train a machine-learning classifier, using deep fingerprinting (DF), to predict videos viewed over Tor. DF is a machine-learning technique that relies on convolutional neural networks (CNN) and can be used to conduct VF attacks against Tor. By analyzing the results of these experiments, we can more accurately identify malicious video streaming activity over Tor.
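
    As a concrete illustration of deep fingerprinting, the sketch below defines a small 1D convolutional network over fixed-length packet-direction sequences (+1 outgoing, -1 incoming), the usual DF input representation. The layer sizes and sequence length are assumptions for illustration, not the thesis's exact architecture.

```python
# Minimal sketch of a deep-fingerprinting-style classifier in PyTorch.
# Input: fixed-length sequences of packet directions (+1 / -1) from an
# encrypted Tor session. Architecture details here are illustrative.
import torch
import torch.nn as nn

class DFClassifier(nn.Module):
    def __init__(self, num_videos: int, seq_len: int = 5000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(8),
        )
        with torch.no_grad():  # infer the flattened size with a dummy pass
            flat = self.features(torch.zeros(1, 1, seq_len)).numel()
        self.classifier = nn.Linear(flat, num_videos)

    def forward(self, x):  # x: (batch, seq_len) of +/-1 direction values
        z = self.features(x.unsqueeze(1))
        return self.classifier(z.flatten(1))

# e.g. logits = DFClassifier(num_videos=50)(torch.randn(4, 5000))
```

    In a closed-world experiment, num_videos is the size of the monitored set; an open-world variant would typically add a background "unmonitored" class.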

    RPDP: An Efficient Data Placement based on Residual Performance for P2P Storage Systems

    Storage systems using Peer-to-Peer (P2P) architecture are an alternative to traditional client-server systems. They offer better scalability and fault tolerance while eliminating the single point of failure. The nature of P2P storage systems, which consist of heterogeneous nodes, however, introduces data placement challenges that create implementation trade-offs (e.g., between performance and scalability). The existing Kademlia-based DHT data placement method stores data at the closest node, where distance is measured by a bit-wise XOR operation between the data key and a given node's ID. This approach is highly scalable because it requires no global knowledge for placing or retrieving data. It does not, however, consider the heterogeneous performance of the nodes, which can result in imbalanced resource usage that affects the overall latency of the system. Other works implement criteria-based selection that addresses the heterogeneity of nodes, but they often cause subsequent data retrieval to require global knowledge of where the data is stored. This paper introduces Residual Performance-based Data Placement (RPDP), a novel data placement method based on the dynamic temporal residual performance of data nodes. RPDP places data on the most appropriate nodes, selected based on their throughput and latency, with the aim of achieving lower overall latency by balancing data distribution with respect to the individual performance of nodes. RPDP relies on a Kademlia-based DHT with a modified data structure that allows data to be subsequently retrieved without the need for global knowledge. The experimental results indicate that RPDP reduces the overall latency of the baseline Kademlia-based P2P storage system (by 4.87%) and also reduces the variance of latency among the nodes, with minimal impact on data retrieval complexity.
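
    The contrast between the two placement rules can be made concrete. In the sketch below, plain Kademlia placement picks the node whose ID has minimal XOR distance to the data key, while an RPDP-style variant re-ranks the k XOR-closest candidates by a residual-performance score. The scoring formula and candidate-set size are illustrative assumptions; the paper's exact definitions are not given in the abstract.

```python
# Illustrative sketch: XOR-closest placement vs. a residual-performance
# re-ranking over a small candidate set. IDs and keys share one ID space.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int        # same ID space as data keys (e.g., 160-bit)
    throughput: float   # observed MB/s, higher is better
    latency: float      # observed ms, lower is better

def kademlia_place(key: int, nodes: list[Node]) -> Node:
    # Plain Kademlia: store at the node with minimal XOR distance.
    return min(nodes, key=lambda n: n.node_id ^ key)

def rpdp_place(key: int, nodes: list[Node], k: int = 4) -> Node:
    # Candidate set: the k XOR-closest nodes, so lookups stay local.
    candidates = sorted(nodes, key=lambda n: n.node_id ^ key)[:k]
    # Assumed score: favour high throughput and low latency.
    return max(candidates, key=lambda n: n.throughput / (1.0 + n.latency))
```

    Restricting the choice to the XOR-closest candidates is what keeps retrieval free of global knowledge: a lookup can still walk toward the key and probe the same small neighbourhood.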

    Mathematical models to evaluate the impact of increasing serotype coverage in pneumococcal conjugate vaccines

    Of the over 100 serotypes of Streptococcus pneumoniae, only 7 were included in the first pneumococcal conjugate vaccine (PCV). While PCV reduced disease incidence, in part because of a herd immunity effect, a replacement effect was observed whereby disease was increasingly caused by serotypes not included in the vaccine. Dynamic transmission models can account for these effects to describe post-vaccination scenarios, whereas economic evaluations can enable decision-makers to compare vaccines of increasing valency for implementation. This thesis has four aims. First, to explore the limitations and assumptions of published pneumococcal models and the implications for future vaccine formulation and policy. Second, to conduct a trend analysis assembling all the available evidence for serotype replacement in Europe, North America and Australia to characterise invasive pneumococcal disease (IPD) caused by vaccine-type (VT) and non-vaccine-type (NVT) serotypes. The motivation behind this is to assess the patterns of relative abundance in IPD cases pre- and post-vaccination, to examine country-level differences in relation to the vaccines employed over time since introduction, and to assess the growth of the replacement serotypes in comparison with the serotypes targeted by the vaccine. The third aim is to use a Bayesian framework to estimate serotype-specific invasiveness, i.e. the rate of invasive disease given carriage. This is useful for dynamic transmission modelling, as transmission occurs through carriage but the majority of serotype-specific pneumococcal data lies in active disease surveillance. It also helps address whether serotype replacement reflects serotypes that are intrinsically more invasive, or whether serotypes in a specific location are more invasive than in other locations. Finally, the last aim of this thesis is to estimate the epidemiological and economic impact of increasing serotype coverage in PCVs using a dynamic transmission model. Together, the results highlight that, though there are key parameter uncertainties that merit further exploration, divergence in serotype replacement and inconsistencies in invasiveness at the country level may make a universal PCV suboptimal.
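
    For the third aim, invasiveness can be framed as a rate of invasive disease per unit of carriage exposure. The sketch below is a minimal conjugate Gamma-Poisson version of such an estimate, assuming IPD case counts are Poisson with rate proportional to carriage prevalence and person-time; the prior and the input figures are illustrative assumptions, not the thesis's actual model or data.

```python
# Minimal conjugate Gamma-Poisson sketch of a serotype-invasiveness
# estimate: cases ~ Poisson(invasiveness * carriage_prevalence * time).
# Prior and example numbers are illustrative assumptions.
def invasiveness_posterior(cases: int,
                           carriage_prevalence: float,
                           person_years: float,
                           prior_shape: float = 1.0,
                           prior_rate: float = 1.0):
    """Return the posterior Gamma(shape, rate) for serotype invasiveness,
    expressed in invasive cases per carrier-year."""
    exposure = carriage_prevalence * person_years  # carrier-years observed
    return prior_shape + cases, prior_rate + exposure

# e.g. a serotype with 40 IPD cases, 2% carriage, 1e6 person-years:
shape, rate = invasiveness_posterior(40, 0.02, 1_000_000)
print(f"posterior mean invasiveness: {shape / rate:.2e} cases per carrier-year")
```

    Because the Gamma prior is conjugate to the Poisson likelihood, the posterior is available in closed form, which makes it easy to propagate serotype-specific uncertainty into a downstream transmission model.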

    Beyond invisibility: The position and role of the literary translator in the digital paratextual space

    This thesis presents a new theoretical framework through which to analyse the visibility of literary translators in the digital materials that present translations to readers, referred to throughout as paratextual spaces. Central to this model is the argument that paratextual ‘visibility’ must be understood as including both the way translators and their labour are presented to readers, defined here as their position, and their role in the establishment of that position. Going beyond Lawrence Venuti’s concept of invisibility as an inevitably negative position to be fought against, this thesis instead establishes paratextual visibility as a complex negotiation between the agency of individual translators, the needs of a publishing house and the interests of readers. The value of this approach is demonstrated through a case study examining the visibility of translator Jamie Bulloch in the digital spaces surrounding his English-language translations of two novels by German author Timur Vermes: Look Who’s Back and The Hungry and the Fat. This analysis finds that even though Bulloch played an early role in creating the publisher’s paratextual materials, publisher MacLehose Press prioritised making the novels’ German origins and the foreignness of the texts visible over Bulloch’s status as the translator, or his translatorship. Bulloch’s limited visibility in the publisher-created materials was then reproduced in digital paratexts created by readers and third parties such as the retailer Amazon, despite his attempts to interact with readers and perform his translatorship in digital spaces such as Twitter. Rather than challenging Bulloch’s limited visibility, then, digital spaces served to amplify it. This thesis therefore finds that a translator’s active participation in the promotion of their work does not always equate to increased visibility, demonstrating the need to go beyond Venuti’s invisibility and towards understanding the multifaceted roles played by translators in presenting literary texts to new audiences.

    CITIES: Energetic Efficiency, Sustainability; Infrastructures, Energy and the Environment; Mobility and IoT; Governance and Citizenship

    This book collects important contributions on smart cities and was created in collaboration with the ICSC-CITIES2020 conference, held in San José (Costa Rica) in 2020. It gathers articles on: energetic efficiency and sustainability; infrastructures, energy and the environment; mobility and IoT; and governance and citizenship.

    Bibliographic Control in the Digital Ecosystem

    With the contributions of international experts, this book aims to explore the new boundaries of universal bibliographic control. Bibliographic control is radically changing because the bibliographic universe is radically changing: its resources, agents, technologies, standards and practices. Among the main topics addressed are: library cooperation networks; legal deposit; national bibliographies; new tools and standards (IFLA LRM, RDA, BIBFRAME); authority control and new alliances (Wikidata, Wikibase, Identifiers); new ways of indexing resources (artificial intelligence); institutional repositories; the new book supply chain; “discoverability” in the IIIF digital ecosystem; the role of thesauri and ontologies in the digital ecosystem; and bibliographic control and search engines.

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions in major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to the technology frameworks that are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing together businesses and leading researchers to harness the value of data for the benefit of society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.