
    Detecting Traffic Conditions Model Based On Clustering Nodes Situations In VANET

    In the last decade, cooperative vehicular networking has been one of the most studied areas in the development of intelligent transportation systems (ITS). It is considered an important approach for sharing periodic traffic situations over vehicular ad hoc networks (VANETs) to improve efficiency and safety on the road. However, exchanging traffic data under the high mobility of VANETs raises a number of issues, such as broadcast storms, hidden nodes and network instability. This paper proposes a new model to detect traffic conditions by clustering the traffic situations gathered from the nodes (vehicles) in a VANET. The model introduces new principles of multi-level clustering to detect traffic conditions for road users. Our model (a) divides the situations of vehicles into clusters, (b) defines a set of metrics to capture the correlations among vehicles and (c) detects the traffic condition in certain areas. These metrics are simulated in the network simulator environment (NS-3) to study the effectiveness of the model.
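    As a rough illustration of the clustering idea, the sketch below groups per-vehicle situation reports by road segment and labels each segment from the mean reported speed. The segment identifiers, speed thresholds and the mean-speed metric are hypothetical simplifications, not the paper's actual multi-level clustering metrics.

```python
from collections import defaultdict
from statistics import mean

def detect_conditions(reports, congested_kmh=20, slow_kmh=50):
    """Cluster per-vehicle situation reports by road segment and
    label each segment's traffic condition from the mean speed."""
    clusters = defaultdict(list)
    for segment_id, speed_kmh in reports:
        clusters[segment_id].append(speed_kmh)
    conditions = {}
    for segment_id, speeds in clusters.items():
        avg = mean(speeds)
        if avg < congested_kmh:
            conditions[segment_id] = "congested"
        elif avg < slow_kmh:
            conditions[segment_id] = "slow"
        else:
            conditions[segment_id] = "free-flow"
    return conditions

reports = [("A1", 12), ("A1", 18), ("B2", 65), ("B2", 70), ("C3", 35)]
print(detect_conditions(reports))
# {'A1': 'congested', 'B2': 'free-flow', 'C3': 'slow'}
```

    In the paper's setting each vehicle would broadcast a richer situation vector and the clustering would run over several levels, but the grouping-then-labelling structure is the same.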

    Implementation of hybrid artificial intelligence technique to detect covert channels in new generation network protocol IPv6

    Intrusion detection systems offer a monolithic way to detect attacks by monitoring network communications and searching for abnormal characteristics and malicious behaviour. Cyber-attacks can be carried out through covert channels, which are currently among the most sophisticated challenges facing network security systems. A covert channel is used to exfiltrate or infiltrate classified information from legitimate targets; this manipulation violates network security policy and privacy. The new-generation Internet Protocol version 6 (IPv6) has certain security vulnerabilities that need to be addressed using further advanced techniques. Fuzzy rules are implemented to classify different network attacks as an advanced machine learning technique, while a genetic algorithm is used as an optimization technique to obtain the ideal fuzzy rule. This paper proposes a novel hybrid covert channel detection system implementing two Artificial Intelligence (AI) techniques, Fuzzy Logic and Genetic Algorithm (FLGA), to obtain a sufficient and optimal detection rule against covert channels. Our approach counters sophisticated unknown network attacks through an advanced analysis based on deep packet inspection. Our suggested system achieves a high detection rate of 97.7% and better performance in comparison to previously tested techniques.
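    The fuzzy-plus-genetic combination can be sketched in miniature: a triangular fuzzy membership function scores a single packet feature, and a tiny genetic algorithm evolves the membership parameters to maximize classification accuracy on labelled samples. The single-feature setup, the truncation selection and the Gaussian mutation are illustrative assumptions, not the paper's FLGA implementation.

```python
import random

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to the peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fitness(params, samples):
    """Fraction of labelled samples the fuzzy rule classifies correctly."""
    a, b, c = params
    correct = 0
    for feature, is_covert in samples:
        predicted = triangular(feature, a, b, c) > 0.5
        correct += predicted == is_covert
    return correct / len(samples)

def evolve(samples, pop_size=30, generations=50, seed=1):
    """Tiny genetic algorithm: truncation selection with Gaussian
    mutation over the membership parameters (a, b, c)."""
    rng = random.Random(seed)
    pop = [sorted(rng.uniform(0, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, samples), reverse=True)
        elite = pop[: pop_size // 2]               # keep the best half
        children = [
            sorted(min(max(g + rng.gauss(0, 0.05), 0), 1) for g in parent)
            for parent in elite                    # mutated copies of the elite
        ]
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, samples))
```

    On toy data where covert traffic clusters around one feature value and benign traffic around another, the evolved rule typically separates the two classes; the real system would evolve rule sets over many packet-level features extracted by deep packet inspection.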

    Participation Decisions and Measurement Error in Web Surveys

    This dissertation examines participation and question-answering decisions in web surveys in a unified framework. These decisions include whether the respondent will choose to start the survey, whether to continue or break off, whether to provide an answer to each survey question, and how much effort to exert in answering the questions. First, nonresponse is found to be highly correlated within a person; individuals who do not participate in one web survey are far more likely to also not participate in another web survey. Several survey-related factors are identified that further influence nonresponse. More surprisingly, independent survey requests are found to contribute to nonresponse: asking someone to complete a survey makes them less likely to agree to complete a later, unrelated survey. Breakoff, however, seems to be more dependent on the survey than specific to a person. Breakoff is found to be higher on pages asking questions about sensitive behaviors and on pages with complex layouts, particularly those using grids. Interestingly, breakoff is also found to be higher on pages merely displaying section introductions, much like in telephone interviews. Finally, a series of experiments investigates the impact of placing multiple questions per page, and in grids. Greater measurement error is found when multiple questions are placed on a page, and greater still when they are also displayed in a grid. Evidence is found in support of a perceived-burden hypothesis due to layout.
    Ph.D. Survey Methodology, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/96497/1/Peytchev_dissertation.pd

    Effect of Interviewer Experience on Interview Pace and Interviewer Attitudes


    Enabling the use of a planning agent for urban traffic management via enriched and integrated urban data

    Improving a city’s infrastructure is seen as a crucial part of its sustainability, leading to efficiencies and opportunities driven by technology integration. One significant step is to support the integration and enrichment of a broad variety of data, often using state-of-the-art linked data approaches. Among the many advantages of such enrichment is that it may enable the use of intelligent processes to autonomously manage urban facilities such as traffic signal controls. In this paper we document an attempt to integrate sets of sensor and historical data using a data hub and a set of ontologies for the data. We argue that access to such high-level integrated data sources enhances the capabilities of an urban transport operator. We demonstrate this by documenting the development of a planning agent which takes such data as inputs in the form of logic statements and, when given traffic goals to achieve, outputs complex traffic signal strategies that help transport operators deal with exceptional events such as road closures or road traffic saturation. The aim is to create an autonomous agent which reacts to commands from transport operators in the face of exceptional events involving saturated roads, and which creates, executes and monitors plans to deal with the effects of such events. We evaluate the intelligent agent in a region of a large urban area, under the direction of urban transport operators.
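    A planning agent consuming logic statements can be sketched as a STRIPS-like forward search: enriched sensor data becomes a set of facts, each signal-control action lists its preconditions and effects, and the planner searches for an action sequence that achieves the goal. All fact and action names below are invented for illustration; the paper's agent, ontologies and data hub are far richer.

```python
# Logic-style facts derived from (hypothetical) integrated urban data.
facts = {
    ("saturated", "junction_12"),
    ("closed", "road_a38"),
    ("feeds", "road_a38", "junction_12"),
}

# Each action maps to (preconditions, facts added, facts removed).
actions = {
    "extend_green_junction_12": (
        {("saturated", "junction_12")},
        {("relieved", "junction_12")},
        {("saturated", "junction_12")},
    ),
}

def plan(state, goal, actions, max_depth=5):
    """Depth-first forward search over STRIPS-like actions;
    returns a list of action names, or None if no plan is found."""
    if goal <= state:
        return []
    if max_depth == 0:
        return None
    for name, (pre, add, rem) in actions.items():
        if pre <= state:
            rest = plan((state - rem) | add, goal, actions, max_depth - 1)
            if rest is not None:
                return [name] + rest
    return None

print(plan(facts, {("relieved", "junction_12")}, actions))
# ['extend_green_junction_12']
```

    The real agent would additionally execute and monitor the plan, replanning when the observed facts diverge from the predicted ones.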

    How to survey displaced workers in Switzerland? Sources of bias and ways around them

    Studying career outcomes after job loss is challenging because individually displaced workers form a self-selected group. Indeed, the same factors that cause workers to lose their jobs, such as lack of motivation, may also reduce their re-employment prospects. Using data from plant closures, where all workers were displaced irrespective of their individual characteristics, offers a way around this selection bias. There is no systematic data collection on workers displaced by plant closure in Switzerland. Accordingly, we conducted our own survey of 1,200 manufacturing workers who had lost their jobs two years earlier. The analysis of observational data gives rise to a set of methodological challenges, in particular nonresponse bias. Our survey addressed this issue by mixing data collection modes and repeating contact attempts. In addition, we combined the survey data with data from the public unemployment register to examine the extent of nonresponse bias. Our analysis suggests that some of our adjustments helped to reduce bias. Repeated contact attempts increased the response rate but did not reduce nonresponse bias. In contrast, using telephone interviews in addition to paper questionnaires helped to substantially improve the participation of typically underrepresented subgroups. However, the survey respondents still differ from nonrespondents in terms of age, education and occupation. Interestingly, these differences have no significant impact on the substantive conclusions about displaced workers' re-employment prospects.

    Methodological issues associated with collecting sensitive information over the telephone - experience from an Australian non-suicidal self-injury (NSSI) prevalence study

    Background: Collecting population data on sensitive issues such as non-suicidal self-injury (NSSI) is problematic. Case note audits and hospital/clinic-based presentations only record severe cases and do not distinguish between suicidal and non-suicidal intent. Community surveys have largely been limited to school and university students, resulting in little of the much-needed population-based data on NSSI. Collecting these data via a large-scale population survey presents challenges to survey methodologists. This paper addresses the methodological issues associated with collecting this type of data via CATI.
    Methods: An Australia-wide population survey was funded by the Australian Government to determine prevalence estimates of NSSI and its associations, predictors, relationships to suicide attempts and suicide ideation, and outcomes. Computer-assisted telephone interviewing (CATI) was undertaken on a random sample of the Australian population aged 10 years and over, drawn from randomly selected households.
    Results: Overall, from 31,216 eligible households, 12,006 interviews were undertaken (response rate 38.5%). The 4-week prevalence of NSSI was 1.1% (95% CI 0.9-1.3%) and lifetime prevalence was 8.1% (95% CI 7.6-8.6%). Methodological concerns and challenges in the collection of these data included extensive interviewer training and post-interview counselling. Ethical considerations, especially with children as young as 10 years of age being asked sensitive questions, were addressed prior to data collection. The solution required a large amount of information to be sent to each selected household prior to the telephone interview, which contributed to a lower than expected response rate. Non-coverage error caused by the population of interest being highly mobile, homeless or institutionalised was also a suspected issue in this low-prevalence condition. In many circumstances the numbers missing from the sampling frame are small enough not to cause concern, especially when compared with the population as a whole, but within our population of interest we believe that the most likely direction of bias is towards an underestimation of our prevalence estimates.
    Conclusion: Collecting valid and reliable data is a paramount concern of health researchers and survey research methodologists. The challenge is to design cost-effective studies, especially those associated with low-prevalence issues, and to balance time and convenience against validity, reliability, sampling, coverage, nonresponse and measurement error issues.
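    The reported figures can be checked with a normal-approximation (Wald) confidence interval for a proportion, which reproduces both the response rate and the lifetime-prevalence interval from the abstract. The choice of the Wald interval is an assumption; the authors may have used a different interval method.

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

n_interviews = 12006
response_rate = n_interviews / 31216           # interviews / eligible households
lo, hi = wald_ci(0.081, n_interviews)          # lifetime NSSI prevalence 8.1%

print(f"response rate {response_rate:.1%}")                 # 38.5%
print(f"lifetime prevalence 95% CI {lo:.1%}-{hi:.1%}")      # 7.6%-8.6%
```

    Both rounded results match the values reported in the abstract.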