
    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging (fMRI) has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We conclude by formulating recommendations for future directions in this area.
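    As a concrete illustration of one of the approaches named above, the sketch below runs a pairwise Granger causality test on two synthetic, coupled time series standing in for region signals. The coupling strength, lag choice, and use of statsmodels are illustrative assumptions, not the review's setup.

```python
# A minimal sketch of pairwise Granger causality on synthetic data.
# The "region" signals and coupling strength are illustrative only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)              # "source region" signal
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):                   # y depends on past x, so x -> y
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

# Tests whether the second column (x) Granger-causes the first (y)
# at lags 1..3; the function also prints its own detailed report.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
for lag, (tests, _) in results.items():
    f_stat, p_value = tests["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4g}")
```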

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as accelerometers, gyroscopes, microphones, and cameras. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time monitoring of road traffic, air and noise pollution, crime, and wildlife, to name a few. Differently from prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental in retrieving reliable and up-to-date information about the events being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
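    To make the QoI idea concrete, the sketch below shows one common way, not necessarily the paper's framework, to score contributors: a truth-discovery loop that jointly estimates event values and per-user reliability. All report data, update rules, and thresholds are hypothetical.

```python
# A hedged sketch of reliability-weighted aggregation of crowdsensed
# reports (truth discovery). Users, events, and values are made up.
from collections import defaultdict

reports = [  # (user, event, reported_value) -- toy noise-level readings
    ("u1", "noise@park", 55.0), ("u2", "noise@park", 54.0),
    ("u3", "noise@park", 80.0),  # outlier / unreliable contributor
    ("u1", "noise@road", 70.0), ("u2", "noise@road", 71.0),
    ("u3", "noise@road", 40.0),
]

weight = defaultdict(lambda: 1.0)          # per-user reliability
for _ in range(10):
    # 1) estimate each event value as a reliability-weighted mean
    estimate = {}
    for event in {e for _, e, _ in reports}:
        obs = [(u, v) for u, e, v in reports if e == event]
        total = sum(weight[u] for u, _ in obs)
        estimate[event] = sum(weight[u] * v for u, v in obs) / total
    # 2) re-score users: lower mean squared error -> higher weight
    errors = defaultdict(list)
    for u, e, v in reports:
        errors[u].append((v - estimate[e]) ** 2)
    for u, errs in errors.items():
        weight[u] = 1.0 / (sum(errs) / len(errs) + 1e-6)

print(estimate)          # converged event estimates
print(dict(weight))      # u3 ends up strongly down-weighted
```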

    FixMiner: Mining Relevant Fix Patterns for Automated Program Repair

    Patching is a common activity in software development. It is generally performed on a source code base to address bugs or add new functionality. In this context, given the recurrence of bugs across projects, the associated similar patches can be leveraged to extract generic fix actions. While the literature includes various approaches that leverage similarity among patches to guide program repair, these approaches often do not yield fix patterns that are tractable and reusable as actionable input to APR systems. In this paper, we propose a systematic and automated approach to mining relevant and actionable fix patterns based on an iterative clustering strategy applied to atomic changes within patches. The goal of FixMiner is thus to infer separate and reusable fix patterns that can be leveraged in other patch generation systems. Our technique, FixMiner, leverages the Rich Edit Script, a specialized tree structure of the edit scripts that captures the AST-level context of code changes. FixMiner uses a different tree representation of Rich Edit Scripts for each round of clustering to identify similar changes: abstract syntax trees, edit action trees, and code context trees. We have evaluated FixMiner on thousands of software patches collected from open-source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting the change information in Rich Edit Scripts. We further integrated the mined patterns into an automated program repair prototype, PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 81% of PARFixMiner's generated plausible patches are correct. Comment: 31 pages, 11 figures.
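    The sketch below illustrates the flavour of the iterative clustering, not FixMiner's implementation: patches are grouped by successively more concrete views (tree shape, then edit actions, then code context), and clusters that survive all rounds suggest recurring fix patterns. The patch records and view names are made up, and real Rich Edit Scripts are trees rather than strings.

```python
# An illustrative sketch of clustering patches in three rounds over
# increasingly concrete views of each change, in the spirit of FixMiner.
from collections import defaultdict

patches = [
    {"shape": "IfStmt",     "actions": ("UPD cond",), "context": "null check"},
    {"shape": "IfStmt",     "actions": ("UPD cond",), "context": "null check"},
    {"shape": "IfStmt",     "actions": ("INS stmt",), "context": "guard"},
    {"shape": "MethodCall", "actions": ("UPD arg",),  "context": "cast"},
]

def cluster(items, key):
    """Group items by the given view of the change."""
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return list(groups.values())

# Round 1: shape trees; round 2: edit-action trees; round 3: context trees.
clusters = [patches]
for view in ("shape", "actions", "context"):
    clusters = [sub for c in clusters for sub in cluster(c, lambda p: p[view])]

# Clusters that still contain several patches suggest a recurring,
# reusable fix pattern.
for c in clusters:
    if len(c) > 1:
        print("candidate pattern:", c[0]["shape"], c[0]["actions"])
```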

    Impacts and Detection of Design Smells

    Changes are continuously made in the source code to take into account the needs of the customers and to fix faults. Continuous change can lead to antipatterns and code smells, collectively called "design smells", occurring in the source code. Design smells are poor solutions to recurring design or implementation problems, typically in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and limited experience, developers cannot always follow standard design and coding techniques, i.e., design patterns. Consequently, they introduce design smells into their systems. In the literature, several authors have claimed that design smells make object-oriented software systems more difficult to understand, more fault-prone, and harder to change than systems without such design smells. Yet, few of these authors have empirically investigated the impact of design smells on software understandability, and none of them has studied the impact of design smells on developers' fault-fixing effort. In this thesis, we propose three principal contributions.
    The first contribution is an empirical study to bring evidence of the impact of design smells on comprehension and change. We design and conduct two experiments with 59 subjects to assess the impact of the composition of two Blob or two Spaghetti Code occurrences on the performance of developers performing comprehension and change tasks. We measure developers' performance using: (1) the NASA task load index for their effort; (2) the time that they spent performing their tasks; and (3) their percentages of correct answers. The results of the two experiments showed that two occurrences of the Blob or Spaghetti Code design smells significantly impede developers' performance during comprehension and change tasks. The obtained results justify a posteriori previous research on the specification and detection of design smells. Software development teams should warn developers against high numbers of occurrences of design smells and recommend refactorings at each step of the development process to remove them when possible.
    In the second contribution, we investigate the relation between design smells and faults in classes from the point of view of developers who must fix faults. We study the impact of the presence of design smells on the effort required to fix faults, which we measure using three metrics: (1) the duration of the fixing period; (2) the number of fields and methods impacted by fault fixes; and (3) the entropy of the fault fixes in the source code. We conduct an empirical study with 12 design smells detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the duration of the fixing period is longer for faults involving classes with design smells. Also, fixing faults in classes with design smells impacts more files, more fields, and more methods. We also observed that, after a fault is fixed, the number of occurrences of design smells in the classes involved in the fault decreases. Understanding the impact of design smells on development effort is important to help development teams better assess and forecast the impact of their design decisions, and therefore direct their effort to improve the quality of their software systems. Development teams should monitor and remove design smells from their software systems because these smells are likely to increase change effort.
    The third contribution concerns design smell detection. During maintenance and evolution tasks, it is important to have a tool able to detect design smells incrementally and iteratively. This incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account occurrences of design smells as they find them during comprehension and change. Researchers have proposed approaches to detect occurrences of design smells, but these approaches currently have four limitations: (1) they require extensive knowledge of design smells; (2) they have limited precision and recall; (3) they are not incremental; and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a novel approach to detect design smells based on a machine learning technique, support vector machines, and taking into account practitioners' feedback. Through an empirical study involving three systems and four design smells, we showed that the accuracy of SMURF is greater than that of DETEX and BDTEX when detecting design smell occurrences. We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we reported that SMURF's accuracy improves when practitioners' feedback is taken into account.
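    A minimal sketch of the core idea behind a SMURF-like detector, assuming scikit-learn: an SVM trained on class-level metrics, with practitioners' feedback on new predictions folded back into the training set. The features, labels, and data are illustrative, not the thesis's actual setup.

```python
# A hedged sketch of SVM-based smell detection with a feedback loop.
# Metrics, labels, and the candidate class are invented for illustration.
import numpy as np
from sklearn.svm import SVC

# toy features per class: [lines_of_code, num_methods, coupling]
X = np.array([[1200, 60, 25], [90, 6, 3], [2000, 85, 40],
              [150, 10, 4], [1700, 70, 30], [60, 4, 2]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = Blob-like smell, 0 = clean

model = SVC(kernel="rbf", gamma="scale").fit(X, y)

candidate = np.array([[1500, 55, 28]], dtype=float)
print("predicted smelly:", bool(model.predict(candidate)[0]))

# A practitioner reviews the prediction; the verdict becomes a new
# label and the model is retrained -- the iterative, feedback-driven step.
practitioner_label = 0  # e.g. a false positive flagged during review
X = np.vstack([X, candidate])
y = np.append(y, practitioner_label)
model = SVC(kernel="rbf", gamma="scale").fit(X, y)
```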

    Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles

    With the further development of automated driving, functional performance increases, resulting in the need for new and comprehensive testing concepts. This doctoral work aims to enable the transition from quantitative mileage to qualitative test coverage by aggregating the results of both knowledge-based and data-driven test platforms. The validity of the test domain can thus be extended cost-effectively throughout the software development process to achieve meaningful test termination criteria.

    Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles - Technological and Methodical Approaches

    Driver assistance systems and automated driving contribute substantially to improving the road safety of motor vehicles, and of commercial vehicles in particular. As automated driving develops further, functional performance increases, which creates requirements for new, holistic testing concepts. Guaranteeing the safety of higher levels of automated driving functions requires novel verification and validation methods. The goal of this work is to enable the transition from quantitative mileage to qualitative test coverage by aggregating test results from knowledge-based and data-driven test platforms. Adaptive test coverage thus aims at a trade-off between efficiency and effectiveness criteria for the validation of automated driving functions during the product development of commercial vehicles. This work comprises the design and implementation of a modular framework for the customer-oriented validation of automated driving functions at reasonable cost. Starting from conflict management for the requirements of the test strategy, highly automated test approaches are developed. Each test approach is integrated with its respective test objectives to form the basis of a context-driven test concept. The main contributions of this work address four focal points:
    * First, a co-simulation approach is presented with which the sensor inputs of a hardware-in-the-loop test bench can be simulated and/or stimulated using synthetic driving scenarios. The presented setup offers a phenomenological modelling approach that balances model granularity against the computational cost of real-time simulation. This method is used for the modular integration of simulation components, such as traffic simulation and vehicle dynamics, to model relevant phenomena in critical driving scenarios.
    * Second, a measurement and data-analysis concept for the worldwide validation of automated driving functions is presented, which scales to record vehicle and/or environment sensor data for specific driving events on the one hand, and continuous data for statistical validation and software development on the other. Measurement data from country-specific field trials are recorded and stored centrally in a cloud database.
    * Third, an ontology-based approach is described for integrating a complementary knowledge source from field observations into a knowledge-management system. Recordings are grouped by means of an event-based time-series analysis with hierarchical clustering and normalized cross-correlation. From each extracted cluster and its parameter space, the probability of occurrence of the corresponding logical scenario and the probability distributions of the associated parameters can be derived. Through correlation analysis of synthetic and naturalistic driving scenarios, the requirements-based test coverage is extended adaptively and systematically with executable scenario specifications.
    * Finally, a prospective risk assessment is performed as an inverted confidence level of measurable safety, using sensitivity and reliability analyses. The failure domain can be identified in the parameter space in order to predict the probability of failure for each extracted logical scenario with different sampling methods, such as Monte Carlo simulation and adaptive importance sampling. The estimated probability of a safety violation for each clustered logical scenario then yields a measurable safety prediction.
    The presented framework makes it possible to close the gap between knowledge-based and data-driven test platforms and thus to consistently extend the knowledge base covering the operational design domains. In summary, the results demonstrate the benefits and challenges of the developed framework for measurable safety through a confidence measure of the risk assessment. This enables a cost-efficient extension of the validity of the test domain throughout the software development process in order to reach the required test termination criteria.
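    The sketch below illustrates the final step of this pipeline with plain Monte Carlo sampling: estimating the probability of a safety violation for a single logical scenario from its parameter distributions. The cut-in scenario, its distributions, and the safety criterion are invented for illustration; adaptive importance sampling would concentrate samples near the failure domain to reduce the variance of such rare-event estimates.

```python
# A hedged sketch of failure-probability estimation for one logical
# scenario by crude Monte Carlo sampling. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Parameter distributions as they might be derived from clustered field data.
ego_speed   = rng.normal(22.0, 2.0, n)             # m/s
cutin_speed = rng.normal(15.0, 2.0, n)             # m/s
gap         = rng.lognormal(np.log(20.0), 0.4, n)  # m, gap at cut-in
reaction    = rng.normal(0.8, 0.2, n)              # s, system reaction time

# Toy criterion: a violation occurs if the gap is consumed during the
# reaction phase plus relative braking (deceleration 6 m/s^2).
closing = np.maximum(ego_speed - cutin_speed, 0.0)
remaining = gap - closing * reaction - closing**2 / (2 * 6.0)
p_fail = np.mean(remaining < 0.0)
std_err = np.sqrt(p_fail * (1.0 - p_fail) / n)
print(f"estimated P(violation) = {p_fail:.4f} +/- {std_err:.4f}")
```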

    Dynamic data flow testing

    Data flow testing is a particular form of testing that identifies data flow relations as test objectives. Data flow testing has recently attracted new interest in the context of testing object-oriented systems, since data flow information is well suited to capture relations among object states and can thus provide useful information for testing method interactions. Unfortunately, classic data flow testing, which is based on static analysis of the source code, fails to identify many important data flow relations due to the dynamic nature of object-oriented systems. This thesis presents Dynamic Data Flow Testing, a technique that rethinks data flow testing to suit the testing of modern object-oriented software. Dynamic Data Flow Testing stems from empirical evidence that we collect on the limits of classic data flow testing techniques. We investigate such limits by means of Dynamic Data Flow Analysis, a dynamic implementation of data flow analysis that computes sound data flow information on program traces. We compare data flow information collected with static analysis of the code against information observed dynamically on execution traces, and empirically observe that the data flow information computed with classic analysis of the source code misses a significant amount of information corresponding to relevant behaviors that should be tested. In view of these results, we propose Dynamic Data Flow Testing. The technique promotes synergies between dynamic analysis, static reasoning, and test case generation to automatically extend a test suite with test cases that exercise the complex state-based interactions between objects. Dynamic Data Flow Testing computes precise data flow information of the program with Dynamic Data Flow Analysis and processes the dynamic information to infer new test objectives, which it then uses to generate new test cases. The test cases generated by Dynamic Data Flow Testing exercise relevant behaviors that are otherwise missed by both the original test suite and test suites that satisfy classic data flow criteria.
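    As an illustration of the kind of information a dynamic data flow analysis collects (this is not the thesis's tool), the sketch below intercepts attribute writes and reads on an object at runtime and pairs each use with the statement that last defined the attribute.

```python
# An illustrative sketch of observing def-use pairs on object state at
# runtime: attribute writes are recorded as definitions, attribute reads
# as uses of the most recent definition.
import inspect

class TracedAccount:
    def __setattr__(self, name, value):
        # record the line that (re)defines the attribute
        line = inspect.stack()[1].lineno
        object.__setattr__(self, f"_def_{name}", line)
        object.__setattr__(self, name, value)

    def __getattribute__(self, name):
        value = object.__getattribute__(self, name)
        if not name.startswith("_def_") and not callable(value):
            use_line = inspect.stack()[1].lineno
            def_line = object.__getattribute__(self, f"_def_{name}")
            print(f"def-use pair for '{name}': line {def_line} -> {use_line}")
        return value

a = TracedAccount()
a.balance = 100      # definition
a.balance += 50      # use of that definition, then a redefinition
print(a.balance)     # use of the latest definition
```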

    The application of classical conditioning to the machine learning of a commonsense knowledge of visual events

    In the field of artificial intelligence, possession of commonsense knowledge has long been considered a requirement for constructing a machine that possesses artificial general intelligence. The conventional approach to providing this commonsense knowledge is to manually encode the required knowledge, a process that is both tedious and costly. After an analysis of classical conditioning, it was deemed that constructing a system based upon the stimulus-stimulus interpretation of classical conditioning could allow commonsense knowledge to be learned by a machine directly and passively observing its environment. Based upon these principles, a system was constructed that uses a stream of events observed within the environment to learn rules regarding which event is likely to follow the observation of another event. The system makes use of a feedback loop between three sub-systems: one that associates events that occur together, a second that accumulates evidence that a given association is significant, and a third that recognises the significant associations. The recognition of past associations allows both the creation of evidence for and against the existence of a particular association, and the creation of more complex associations by treating instances of strongly associated event pairs as events in their own right. Testing the abilities of the system involved simulating three different learning environments. The results found that measures of significance based on classical conditioning generally outperformed a probability-based measure. This thesis contributes a theory of how a stimulus-stimulus interpretation of classical conditioning can be used to create commonsense knowledge, and an observation that a significant subset of classical conditioning phenomena likely exists to aid in the elimination of noise. This thesis also represents a significant departure from existing reinforcement learning systems, as the system presented here does not perform any form of action selection.
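    A hedged sketch of the association loop described above: count how often one event directly follows another, score each pair against what independent co-occurrence would predict, and promote strongly associated pairs to learned rules (which could in turn be treated as compound events). The event stream and thresholds are invented; the significance measure here is a simple lift score, not the thesis's conditioning-based measures.

```python
# A toy stimulus-stimulus association learner over an event stream.
from collections import Counter

stream = ["lightning", "thunder", "wind", "lightning", "thunder",
          "rain", "lightning", "thunder", "wind", "rain"]

pair_counts = Counter(zip(stream, stream[1:]))   # adjacent event pairs
event_counts = Counter(stream)

associations = {}
for (a, b), observed in pair_counts.items():
    # expected count of the pair if A and B occurred independently
    expected = event_counts[a] * event_counts[b] / len(stream)
    associations[(a, b)] = observed / expected   # lift-style score

# promote pairs with enough evidence into rules / compound events
rules = [pair for pair, score in associations.items()
         if score > 1.5 and pair_counts[pair] >= 2]
print("learned rules:", rules)   # e.g. lightning -> thunder
```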