52 research outputs found

    An Approach to Guide Users Towards Less Revealing Internet Browsers

    When browsing the Internet, HTTP headers enable both clients and servers to send extra data in their requests or responses, such as the User-Agent string. This string contains information about the sender's device, browser, and operating system. Previous research has shown that numerous privacy and security risks result from exposing sensitive information in the User-Agent string. For example, it enables device and browser fingerprinting as well as user tracking and identification. Our large-scale analysis of thousands of User-Agent strings shows that browsers differ tremendously in the amount of information they include in their User-Agent strings. As such, our work aims at guiding users towards less revealing browsers. To that end, we propose assigning each browser an exposure score based on the information it exposes and its vulnerability records. Our contribution in this work is twofold: first, we provide a full implementation that is ready to be deployed and used; second, we conduct a user study to identify the effectiveness and limitations of our proposed approach. Our implementation is based on more than 52 thousand unique browsers. Our performance and validation analyses show that our solution is accurate and efficient. The source code and data set are publicly available, and the solution has been deployed.
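
    Below is a minimal, hypothetical sketch of the kind of exposure scoring the abstract describes: it scans a User-Agent string for categories of identifying information and sums per-category weights. The category names, patterns, and weights here are illustrative assumptions, and the paper's actual score also incorporates vulnerability records, which this toy omits.

```python
import re

# Hypothetical categories and weights (assumptions, not the paper's values).
PATTERNS = {
    "os":      (re.compile(r"Windows NT [\d.]+|Android [\d.]+|Mac OS X [\d_]+"), 2),
    "browser": (re.compile(r"(Chrome|Firefox|Safari|Edg)/[\d.]+"), 1),
    "device":  (re.compile(r"\((?:[^;)]*;\s*)*[A-Z][\w-]+ Build/"), 3),
}

def exposure_score(user_agent: str) -> int:
    """Sum the weights of every category of identifying
    information detected in the User-Agent string."""
    return sum(w for rx, w in PATTERNS.values() if rx.search(user_agent))

ua = ("Mozilla/5.0 (Linux; Android 10; SM-G960F Build/QP1A.190711.020) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.72 Mobile Safari/537.36")
print(exposure_score(ua))  # 6 -- a higher score means a more revealing browser
```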

    Mobile Robots

    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of a control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when sophisticated methods for brain-signal processing are used. The generated electrophysiological signals can be used to command devices such as cars, wheelchairs, or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are presented. Training mobile robot operators is also a very difficult task because of several factors related to different task executions. The presented improvement relates to environment-model generation based on autonomous mobile robot observations.
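
    The book surveys formation control rather than prescribing a single algorithm; as a generic, hypothetical illustration of the cooperative behaviors listed above (not taken from the book), the sketch below runs a first-order consensus update in which each agent steers toward its neighbors' positions corrected by desired formation offsets. The neighbor graph, gain, and offsets are assumed.

```python
import numpy as np

def formation_step(positions, offsets, neighbors, gain=0.2):
    """One synchronous consensus step: each agent moves to reduce the
    disagreement between its own offset-corrected position and those
    of its neighbors. Converges to the offset shape for a connected
    undirected graph and a sufficiently small gain."""
    new = positions.copy()
    for i, nbrs in neighbors.items():
        err = sum((positions[j] - offsets[j]) - (positions[i] - offsets[i])
                  for j in nbrs)
        new[i] = positions[i] + gain * err
    return new

positions = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 1.0]), 2: np.array([1.0, 5.0])}
offsets   = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 0.0]), 2: np.array([1.0, 2.0])}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # fully connected triangle
for _ in range(50):
    positions = formation_step(positions, offsets, neighbors)
print({i: p.round(2) for i, p in positions.items()})  # settles into the offset triangle
```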

    A Mathematical Framework of Human Thought Process: Rectifying Software Construction Inefficiency and Identifying Characteristic Efficiencies of Networked Systems Via Problem-solution Cycle

    Problem: The lack of a theory to explain the human thought process latently affects the general perception of problem-solving activities. The present study set out to theorize the human thought process (HTP) in order to ascertain, in general, the effect of problem-solving inadequacy on efficiency.

    Method: To theorize HTP, basic human problem-solving activities were investigated through the lens of the problem-solution cycle (PSC). The scope of the PSC investigation was focused on the inefficiency problem in software construction and the latent characteristic efficiencies of a similar networked system. In order to analyze said PSC activities, three mathematical quotients and a messaging wavefunction model, similar to Schrödinger's electronic wavefunction model, were derived for four intrinsic brain traits, namely intelligence, imagination, creativity, and language. These were substantiated using appropriate empirical verifications. Firstly, statistical analysis of the intelligence, imagination, and creativity quotients was done using empirical data with global statistical views from: (1) the 1994–2004 CHAOS reports, the Standish Group International's surveys of software development project successes and failures; (2) 2000–2009 Global Creativity Index (GCI) data based on the 3Ts of economic development (technology, talent, and tolerance indices) from 82 nations; and (3) other varied localized success and failure surveys from 1994–2009 and 1998–2010, respectively. These statistical analyses were done using a spliced decision Sperner system (SDSS) to show that the averages of all empirical scientific data on successes and failures of software production within the specified periods are in excellent agreement with the theoretically derived values. Further, the catalytic effect of creativity (thought catalysis) in the human thought process is outlined and shown to be in agreement with newly discovered branch-like nerve cells in the brains of mice (similar to the human brain). Secondly, the networked communication activities of the language trait during the PSC were scrutinized statistically using journal-journal citation data from 13 randomly selected major 1984 chemistry journals. With the aid of the aforementioned messaging wave formulation, computer simulations of message-phase "thermograms" and "chromatograms" were generated to provide messaging line spectra relative to the behavioral messaging activities of the network under study.

    Results: Theoretical computations stipulated that 66.67% of efficiency was due to the interaction of the intelligence, imagination, and creativity traits (multi-computational skills) and 33.33% was due to the networked linkages of the language trait (aggregated language skills). The worldwide software production and economic data used were normally distributed with a significance level α of 0.005. Thus, there existed a permissible error of 1% attributed to the significance level of said normally distributed data. Of the brain-trait quotient statistics, the imagination quotient (IMGQ) score was 52.53% from the 1994–2004 CHAOS data analysis and 54.55% from the 2010 GCI data. Their average reasonably approximated the 50th percentile of the cumulative distribution of problem-solving skills. On the other hand, the creativity quotient score was 0.99% from the 1994–2004 CHAOS data and 1.17% from the 2010 GCI data, averaging to nearly 1%. The chance of creativity and intelligence working together as joint problem-solving skills was consistently found to average 11.32% (1994–2004 CHAOS: 10.95%; 2010 GCI: 11.68%). Also, the empirical data analysis showed that the language inefficiency of thought flow η′(τ) was 35.0977% for the 1994–2004 CHAOS data and 34.9482% for the 2010 GCI data, averaging around 35%. On the success and failure of software production, statistical analysis of the empirical data showed a 63.2% average efficiency for successful software production (1994–2012) and a 33.94% average inefficiency for failed software production (1998–2010). On the whole, software production projects had a bound efficiency approach level (BEAL) of 94.8%. In the messaging wave analysis of the 13 journal-to-journal citations, the messaging phase-space graphs indicated a fundamental frequency (probable minimum message state) of 11.

    Conclusions: By comparison, using the cutoff level of printed editions of the Journal Citation Reports to substitute for missing data values is inappropriate. However, values from the optimizing method(s) harmonized with the fundamental frequency inferred from message wave analysis using informatics wave equation analysis (IWEA). Due to its evenly spaced chronological data snapshots, the SDSS technique inherently diminishes the difficulty associated with handling large data volumes (big data) for analysis. From the CHAOS and GCI data analyses, the averaged CRTQ scores indicate that, on average, only 1 percent of the entire human race can be considered exceptionally creative. However, in the art of software production, the siphoning effect of the existing latent language inefficiency suffocates the process of solution creation to an efficiency bound of 66.67%. With a BEAL value of 94.8% and a basic human error of 5.2%, it can reasonably be said that software production projects have delivered efficiently within the existing latent inefficiency. Consequently, by inference from the average language inefficiency of thought flow, an average language efficiency of 65% exists in the process of software production worldwide. Reasonably, this correlates very strongly with the existing average software production efficiency of 63.2%, around which the software crisis has stagnated, on average, since the inception of software creation. The persistent dismal performance of software production is attributable to the existing central focus on the use of a multiplicity of programming languages. Acting as an "efficiency buffer", the latter minimizes changes to efficiency in software production, thereby limiting software production efficiency theoretically to 66.67%. From both theoretical and empirical perspectives, this latently shrouds software production in a deficit maximum attainable efficiency (DMAE). The software crisis can only be improved drastically through policy-driven adoption of a universal standard supporting a very minimal number of programming languages. On average, the proposed universal standardization could save the world an estimated 6 trillion US dollars per year, which is currently lost through the existing inefficient software industry.
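
    The headline percentages are mutually consistent under one natural reading (our interpretation; the abstract does not state these relations explicitly): BEAL as the ratio of achieved efficiency to the theoretical 66.67% bound, and average language efficiency as the complement of the roughly 35% language inefficiency:

    \[
    \mathrm{BEAL} = \frac{63.2\%}{66.67\%} \approx 94.8\%, \qquad
    \eta_{\text{lang}} \approx 100\% - 35\% = 65\%.
    \]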

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015 in international conferences, seminars, workshops, and journals, or they are new. The contributions in each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged in the years since the appearance of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, the generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
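
    For orientation, here is a minimal Python sketch of the PCR5 combination rule that several of the collected contributions build on. The volume's own reference implementations are the accompanying Matlab codes and the Silx-Furtif RUST library; this toy merely assumes two sources and focal elements represented as frozensets.

```python
from itertools import product

def pcr5_combine(m1, m2):
    """Combine two basic belief assignments with the PCR5 rule:
    conjunctive consensus first, then each partial conflicting mass
    m1(X)*m2(Y) with X ∩ Y = ∅ is redistributed back to X and Y
    proportionally to m1(X) and m2(Y)."""
    out = {}
    for (X, a), (Y, b) in product(m1.items(), m2.items()):
        Z = X & Y
        if Z:                    # non-empty intersection: conjunctive part
            out[Z] = out.get(Z, 0.0) + a * b
        elif a + b > 0:          # total conflict: proportional redistribution
            out[X] = out.get(X, 0.0) + a * a * b / (a + b)
            out[Y] = out.get(Y, 0.0) + a * b * b / (a + b)
    return out

A, B, AB = frozenset("A"), frozenset("B"), frozenset("AB")
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.3, AB: 0.7}
print(pcr5_combine(m1, m2))  # {A: 0.54, B: 0.18, AB: 0.28} -- still sums to 1.0
```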

    Micro-, Meso- and Macro-Connectomics of the Brain

    Neurosciences, Neurology