509 research outputs found

    Real-time crash prediction models: State-of-the-art, design pathways and ubiquitous requirements

    Proactive traffic safety management systems can monitor traffic conditions in real time, identify the formation of unsafe traffic dynamics, and implement suitable interventions to return unsafe conditions to normal traffic situations. Recent advancements in artificial intelligence, sensor fusion and algorithms have brought the introduction of a proactive safety management system closer to reality. The basic prerequisite for developing such a system is a reliable crash prediction model that takes real-time traffic data as input and evaluates their association with crash risk. Since the early 21st century, several studies have focused on developing such models. Although the idea has matured considerably over time, the endeavours have been quite discrete and fragmented at best, because the fundamental aspects of the overall modelling approach vary substantially. Therefore, a number of transitional challenges have to be identified and subsequently addressed before a ubiquitous proactive safety management system can be formulated, designed and implemented in real-world scenarios. This manuscript conducts a comprehensive review of existing real-time crash prediction models with the aim of illustrating the state of the art and systematically synthesizing the thoughts presented in existing studies, in order to facilitate the translation of the idea into a ready-to-use technology. To that end, it conducts a systematic review by applying various text mining methods and topic modelling. Based on the findings, this paper ascertains the development pathways followed in various studies and formulates the ubiquitous design requirements of such models from existing studies and knowledge of similar systems. Finally, this study evaluates the universality and design compatibility of existing models. This paper is therefore expected to serve as a one-stop knowledge source for facilitating a faster transition from the idea of real-time crash prediction models to a real-world operational proactive traffic safety management system.
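The text-mining step mentioned in this abstract can be illustrated with a toy sketch. The corpus and stopword list below are hypothetical, and raw term frequency is only a crude stand-in for the topic modelling the review actually applies:

```python
from collections import Counter
import re

def top_terms(abstracts, k=3, stopwords=None):
    """Rank the k most frequent content words across a corpus of abstracts.

    A crude first step of a text-mining pipeline; a real review would
    follow this with topic modelling (e.g. LDA) rather than stop here.
    """
    stopwords = stopwords or {"the", "a", "of", "and", "in", "to", "for", "from"}
    counts = Counter()
    for text in abstracts:
        # Lowercase and split on non-letter characters.
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in stopwords)
    return [term for term, _ in counts.most_common(k)]

# Hypothetical two-document corpus for illustration only.
corpus = [
    "Real-time crash prediction from real-time traffic data",
    "Crash risk models for traffic safety management",
]
print(top_terms(corpus, k=4))
```

On this toy corpus the dominant content words are, unsurprisingly, the crash/traffic vocabulary; a topic model would additionally group such terms into latent themes.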

    Applying Machine Learning Techniques to Improve Safety and Mobility of Urban Transportation Systems Using Infrastructure- and Vehicle-Based Sensors

    The importance of sensing technologies in the field of transportation is ever increasing. Rapid improvements in cloud computing, the Internet of Vehicles (IoV), and intelligent transport systems (ITS) enable fast acquisition of sensor data with immediate processing. Machine learning algorithms provide a way to classify or predict outcomes in a selective and timely fashion. High accuracy and increased volatility are the main features of various learning algorithms. In this dissertation, we aim to use infrastructure- and vehicle-based sensors to improve the safety and mobility of urban transportation systems. Smartphone sensors were used in the first study to estimate vehicle trajectory using lane change classification. It addresses the research gap in trajectory estimation, since all previous studies focused on estimating trajectories at roadway segments only. Being a mobile application-based system, it can readily be used as an on-board unit emulator in vehicles that have little or no connectivity. Secondly, smartphone sensors were also used to identify several transportation modes. While this has been studied extensively in the last decade, our method integrates a data augmentation method to overcome the class imbalance problem. Results show that using a balanced dataset improves the classification accuracy of transportation modes. Thirdly, infrastructure-based sensors such as loop detectors and video detectors were used to predict traffic signal states. This system can aid in resolving the complex signal retiming steps that are conventionally used to improve the performance of an intersection. The methodology was transferred to a different intersection, where excellent results were achieved. Fourthly, a magnetic vehicle detection system (MVDS) was used to generate traffic patterns in crash and non-crash events. A variational autoencoder was used for the first time in this study as a data generation tool. The results related to sensitivity and specificity improved by up to 8% compared to other state-of-the-art data augmentation methods.
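The class-imbalance fix described here relies on producing a balanced training set. The abstract's own method is generative (a variational autoencoder); the sketch below instead shows the simplest baseline, random oversampling of minority classes, with entirely hypothetical data:

```python
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Balance a labelled dataset by randomly duplicating minority-class
    samples until every class matches the majority-class count.

    A baseline alternative to generative augmentation (e.g. a VAE),
    shown here only to illustrate the balancing step.
    """
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # Duplicate randomly chosen samples until this class reaches target.
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Hypothetical sensor feature vectors: three "walk" samples, one "bus".
X = [[0.1], [0.2], [0.3], [0.9]]
y = ["walk", "walk", "walk", "bus"]
Xb, yb = oversample(X, y)
print(Counter(yb))  # each class now has 3 samples
```

A generative model such as a VAE improves on this by synthesizing new minority-class samples instead of repeating existing ones, which is the gain the abstract reports.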

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are a focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy will offer significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average 71.5% to 91.3% improvement towards the identification of vulnerabilities likely to be targeted and exploited by cyber threat actors. 
The ROI of patching using our policies resulted in savings in the range of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs to link large data sets, facilitate semantic queries, and create data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
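The nDCG metric used above to score ranking policies has a standard closed form: the discounted cumulative gain of the produced ordering divided by that of the ideal ordering. A minimal sketch, with a hypothetical relevance vector (1 = vulnerability later exploited, 0 = not):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance at rank i is discounted
    by log2(i + 2), so hits near the top of the list count more."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalized DCG: DCG of the given ranking divided by the DCG of
    the ideal (relevance-sorted) ranking. 1.0 means a perfect ordering."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Hypothetical policy output: exploited vulnerabilities at ranks 1 and 3.
policy_ranking = [1, 0, 1, 0]
print(round(ndcg(policy_ranking), 3))  # → 0.92
```

A policy that ranks all exploited vulnerabilities first scores exactly 1.0, which is the yardstick against which the 71.5%-91.3% improvements over CVSS-based ordering are measured.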

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". Indeed, the project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    Improving bicycle helmet research: examining intervention studies and parental experiences

    Despite the recognized protection provided by bicycle helmets, estimates indicate that only 25% of people wear one every time they ride. Although much research has focused on identifying determinants of bicycle helmet use, there has been limited success in increasing and sustaining children's bicycle helmet use. One potential reason for this is a limited understanding of how the identified determinants of helmet use work together to impact behavior. The goal of this dissertation was to improve research and practice around children's bicycle helmet use, with the ultimate aim of reducing the number of head injuries among children. To accomplish this goal, this dissertation is divided into two separate but related products that address critical issues in the field. The first product is a focused literature review of interventions designed to increase children's helmet use, and the other is a qualitative study of parental perceptions of and experiences with children's bicycle riding. The aim of the focused literature review was to gain a better understanding of bicycle helmet use research by identifying gaps in bicycle helmet intervention methodology and to recommend opportunities to strengthen the field. Identifying gaps in intervention research allows for recommendations that can have a direct impact on future interventions. Inclusion criteria were: articles published in English between 1986 and 2011 that focus on children under 18 years old, report on an intervention or the evaluation of an intervention, and have increased helmet use as one of the main outcomes. Thirty-five studies were included in the review. Findings indicated opportunities for improvement in three broad areas: measurement issues, group differences, and analytic techniques.
Recommendations were provided for increasing the accuracy of measurements, examining group differences and differential intervention effects, and using sophisticated analytic techniques to account for the data structure and identify influential contextual variables. The goal of the qualitative study was to develop a model describing the processes associated with children's bicycle helmet use across intrapersonal, interpersonal, community, institutional, and political contexts. The aim was to gain an understanding of how parents assess and manage risks associated with their children's bicycle riding. Using a constructivist grounded theory approach, interviews with parents of children in 3rd-5th grades were conducted. Interviews covering children's bike riding history and current habits were recorded and transcribed verbatim. Using a constant comparative approach, data were analyzed concurrently with data collection. Initial coding identified critical issues in the data, and focused coding was used to further identify specific patterns of behavior. Theoretical sampling was then used to fully develop the categories that emerged. Theoretical coding also described how categories related to one another. A model emerged from the data that explained the cognitive and behavioral processes parents used to balance their anxiety around the perceived dangers of bike riding with their understanding of their children's developmental need for autonomy. Findings also showed that parents' primary concerns focused on more improbable risks (such as child-snatching) rather than higher-probability risks such as falling and head injuries. Implications are discussed in terms of expanding the theoretical foundations of intervention design and addressing parental concerns prior to introducing helmet use information. With refinement, findings from this dissertation may be used to develop interventions that increase sustainable bicycle helmet use and reduce bicycle-related head injuries in children.

    Denial of Service in Web-Domains: Building Defenses Against Next-Generation Attack Behavior

    The existing state of the art in the field of application-layer Distributed Denial of Service (DDoS) protection is generally designed, and thus effective, only for static web domains. To the best of our knowledge, our work is the first that studies the problem of application-layer DDoS defense in web domains of dynamic content and organization, and for next-generation bot behaviour. In the first part of this thesis, we focus on the following research tasks: 1) we identify the main weaknesses of the existing application-layer anti-DDoS solutions proposed in the research literature and in industry, 2) we obtain a comprehensive picture of current-day as well as next-generation application-layer attack behaviour, and 3) we propose novel techniques, based on a multidisciplinary approach that combines offline machine learning algorithms and statistical analysis, for detection of suspicious web visitors in static web domains. Then, in the second part of the thesis, we propose and evaluate a novel anti-DDoS system that detects a broad range of application-layer DDoS attacks, both in static and dynamic web domains, through the use of advanced data mining techniques. The key advantage of our system relative to other systems that resort to challenge-response tests (such as CAPTCHAs) in combating malicious bots is that our system minimizes the number of these tests presented to valid human visitors while succeeding in preventing most malicious attackers from accessing the web site. The results of the experimental evaluation of the proposed system demonstrate effective detection of current and future variants of application-layer DDoS attacks.
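The statistical-analysis component mentioned in task 3 can be sketched in miniature. The example below flags visitors whose per-session request counts are outliers by z-score; the session data, visitor names, and threshold are all hypothetical, and a real detector would combine many behavioural features over far larger samples:

```python
import statistics

def suspicious_visitors(request_counts, z_threshold=1.5):
    """Flag visitors whose per-session request counts sit far above the
    population mean, measured in standard deviations (z-score).

    A toy stand-in for the statistical stage of an application-layer
    DDoS detector; with only a handful of sessions, as here, the
    threshold must be small because z-scores are bounded by sqrt(n-1).
    """
    counts = list(request_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # identical behaviour across visitors: nothing to flag
    return [visitor for visitor, c in request_counts.items()
            if (c - mean) / stdev > z_threshold]

# Hypothetical session log: three human-like visitors, one aggressive bot.
sessions = {"alice": 12, "bob": 9, "carol": 11, "bot-7f3a": 480}
print(suspicious_visitors(sessions))  # → ['bot-7f3a']
```

Next-generation bots that mimic human request rates defeat exactly this kind of single-feature rule, which is why the thesis moves to richer machine-learning models.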

    5th International Conference on Advanced Research Methods and Analytics (CARMA 2023)

    Research methods in economics and the social sciences are evolving with the increasing availability of Internet and Big Data sources of information. As these sources, methods, and applications become more interdisciplinary, the 5th International Conference on Advanced Research Methods and Analytics (CARMA) is a forum for researchers and practitioners to exchange ideas and advances on how emerging research methods and sources are applied to different fields of the social sciences, as well as to discuss current and future challenges.
    Martínez Torres, MDR.; Toral Marín, S. (2023). 5th International Conference on Advanced Research Methods and Analytics (CARMA 2023). Editorial Universitat Politècnica de València. https://doi.org/10.4995/CARMA2023.2023.1700

    Synthetic Worlds for Improving Driver Assistance Systems

    The automotive industry is evolving at a rapid pace; new technologies and techniques are being introduced to make the driving experience more pleasant and safer than it was a few decades ago. But as with any new technology and methodology, there will always be new challenges to overcome. Advanced Driver Assistance Systems (ADAS) have attracted a considerable amount of interest in the research community over the past few decades. This research examines in depth how synthetic world simulations can be used to train the next generation of ADAS to detect and alert the driver to possible risks and dangers during autonomous driving sessions. As autonomous driving is still rolling out, we are far from the point where cars can truly be autonomous in any given environment and scenario, and there are still quite a number of challenges to overcome. Semi-autonomous cars, including those from Tesla, BMW and Mercedes, have already been on the road for a number of years. But even more recently, some of these cars have been involved in accidents that could have been avoided if a driver had had control of the vehicle instead of the autonomous systems. This raises the question of why these cars of the future are so prone to accidents, and what the best way to overcome this problem is. The answer lies in the use of synthetic worlds for designing more efficient ADAS in the least amount of time for the automobile of the future. This thesis explores a number of research areas, starting from the development of an open-source driving simulator that, compared to the state of the art, is cheaper and more efficient to deploy at almost any location. A typical driving simulator can cost from £10,000 to as much as £500,000. Our approach has brought this cost down to less than £2,000 while providing the same visual fidelity and accuracy as the more expensive simulators on the market. On the hardware side, our simulator consists of only four main components: a CPU case, monitors, steering wheel/pedals, and webcams. This allows the simulator to be shipped to any location without the need for any complicated setup. Compared to other state-of-the-art simulators such as CARLA, the setup and programming time is quite low: if a perception-reaction-time (PRT) based setup requires 10 days on state-of-the-art simulators, the same aspect can be programmed on our simulator in as little as 15 minutes, as the simulator is designed from the ground up to record accurate PRT. The simulator was then successfully used to record accurate perception reaction times among 40 subjects under different driving conditions. The results highlight the fact that not all secondary tasks result in higher reaction times. Moreover, the overall reaction times for hands were recorded at 3.51 seconds, whereas those for feet were recorded at 2.47 seconds. The study highlights the importance of mental workload during autonomous driving, which is a vastly important aspect of designing ADAS. The novelty of this study resulted in the generation of a new dataset comprising 1.44 million images targeted at driver-vehicle interactions, which can be used by researchers and engineers to develop advanced driver assistance systems. The simulator was then further modified to generate high-fidelity weather simulations which, compared to simulators like CARLA, provide more control over cloud formations, giving researchers more variables to test during simulations and image generation. The resulting synthetic weather dataset, called the Weather Drive Dataset, is unique and novel, as it is the largest synthetic weather dataset currently available to researchers, comprising 108,333 images with varying weather conditions. Most state-of-the-art datasets contain only non-automotive images or are not synthetic at all. The proposed dataset was evaluated against the Berkeley DeepDrive dataset, resulting in 74% accuracy. This showed that synthetic datasets are valid for training the next generation of vision-based weather classifiers for autonomous driving. The studies performed will prove vital in progressing Advanced Driver Assistance Systems research in a number of different ways. The experiments take into account the necessary state-of-the-art methods to compare and differentiate between the proposed methodologies. The most efficient approaches and best practices are also explained in detail, which can provide the necessary support to other researchers in setting up similar systems and designing synthetic simulations for other research areas.
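The perception-reaction-time (PRT) measurements reported above reduce to simple timestamp arithmetic over the simulator's event log. A minimal sketch, with hypothetical log entries (the abstract does not describe the actual log format):

```python
from statistics import mean

def mean_prt(events):
    """Mean perception-reaction time, in seconds, from a list of
    (stimulus_time, response_time) pairs as a driving simulator
    might log them: stimulus onset vs. the subject's first response."""
    return mean(response - stimulus for stimulus, response in events)

# Hypothetical log: two hand-response events at 3.6 s and 3.4 s PRT,
# loosely matching the ~3.5 s hand reaction times reported in the study.
hand_events = [(10.0, 13.6), (42.0, 45.4)]
print(round(mean_prt(hand_events), 2))  # → 3.5
```

Averaging such per-event PRTs across 40 subjects and per-condition groupings is what yields summary figures like the 3.51 s (hands) and 2.47 s (feet) reported.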

    Advanced analytical methods for fraud detection: a systematic literature review

    The developments of the digital era demand new ways of producing goods and rendering services. This fast-paced evolution in companies demands a new approach from auditors, who must keep up with the constant transformation. With the dynamic dimensions of data, it is important to seize the opportunity to add value to companies. The need to apply more robust methods to detect fraud is evident. In this thesis, the use of advanced analytical methods for fraud detection is investigated through an analysis of the existing literature on the topic. Both a systematic review of the literature and a bibliometric approach are applied to the most appropriate database to measure scientific production and current trends. This study intends to contribute to the academic research that has been conducted, by centralizing the existing information on this topic.

    How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review

    Context: Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called 'safety-critical' systems such as automotive or aeronautic ones has proven very challenging, since the shift in paradigm that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question 'How to Certify Machine Learning Based Safety-critical Systems?'. Method: We conduct a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct Certification. We analyzed the main trends and problems of each sub-field and provided summaries of the papers extracted. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and types of models. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrated the necessity of building connections between the above-mentioned main pillars, which are for now mainly studied separately. Conclusion: We highlight current efforts deployed to enable the certification of ML-based software systems and discuss some future research directions.