Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We conclude by formulating recommendations for future
directions in this area.
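Of the eight methods listed, Granger causality lends itself to a compact illustration: a signal x is said to Granger-cause y if x's past improves the prediction of y beyond what y's own past provides. Below is a minimal sketch on invented toy data; the lag order, coefficients, and variable names are illustrative assumptions, not taken from the review:

```python
import numpy as np

def granger_rss(x, y, lag=2):
    """Residual sums of squares for predicting y from its own past (restricted
    model) and from its own past plus x's past (full model). A much smaller
    full RSS suggests x carries predictive information about y."""
    n = len(y)
    target = y[lag:]
    own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    full = np.column_stack([own] +
                           [x[lag - k: n - k].reshape(-1, 1) for k in range(1, lag + 1)])

    def rss(design):
        design = np.column_stack([np.ones(len(target)), design])  # intercept
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return float(resid @ resid)

    return rss(own), rss(full)

# Toy system: y is driven by the previous value of x plus small noise
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

restricted, full = granger_rss(x, y)
# adding x's past shrinks the prediction error substantially
```

In practice one would turn the two RSS values into an F-statistic and test it; the sketch stops at the prediction-error comparison that the concept rests on.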
Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are
clearly transforming the way we live and perceive technology. Today's
smartphones benefit from almost ubiquitous Internet connectivity and come
equipped with a plethora of inexpensive yet powerful embedded sensors, such as
accelerometer, gyroscope, microphone, and camera. This unique combination has
enabled revolutionary applications based on the mobile crowdsensing paradigm,
such as real-time road traffic monitoring, air and noise pollution monitoring,
crime control, and wildlife monitoring, to name just a few. Unlike prior
sensing paradigms, humans are now the primary actors of the sensing process,
since they are fundamental in retrieving reliable and up-to-date information
about the events being monitored. As humans may behave unreliably or
maliciously, assessing and guaranteeing Quality of Information (QoI) becomes
more important than ever. In this paper, we provide a new framework for
defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the
current state-of-the-art on the topic. We also outline novel research
challenges, along with possible directions of future work.
Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
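As a hedged illustration of why contributor reliability matters for QoI, the classic truth-discovery idea — iterate between estimating the true value of each sensed event and re-weighting contributors by their agreement with it — can be sketched as follows. The matrix layout, iteration count, and damping constant are our own assumptions, not part of the survey's framework:

```python
import numpy as np

def truth_discovery(reports, iters=10):
    """Iteratively estimate event truths and worker reliabilities.
    reports: (workers x events) matrix of sensed values."""
    weights = np.ones(reports.shape[0])
    for _ in range(iters):
        truth = (weights @ reports) / weights.sum()          # weighted estimate per event
        err = ((reports - truth) ** 2).mean(axis=1) + 1e-9   # each worker's mean deviation
        weights = 1.0 / err                                  # reliable workers gain weight
    return truth, weights

# Four honest workers report values near 1.0; one unreliable worker reports 5.0
reports = np.array([[1.0, 1.1, 0.9],
                    [1.0, 0.9, 1.0],
                    [1.1, 1.0, 1.0],
                    [0.9, 1.0, 1.1],
                    [5.0, 5.0, 5.0]])
truth, weights = truth_discovery(reports)
# the estimate converges near the honest consensus, and the outlier's weight collapses
```

The feedback loop is the point: once the estimate leans toward the honest majority, the unreliable worker's error grows and its influence shrinks further on each pass.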
FixMiner: Mining Relevant Fix Patterns for Automated Program Repair
Patching is a common activity in software development. It is generally
performed on a source code base to address bugs or add new functionalities. In
this context, given the recurrence of bugs across projects, the associated
similar patches can be leveraged to extract generic fix actions. While the
literature includes various approaches leveraging similarity among patches to
guide program repair, these approaches often do not yield fix patterns that are
tractable and reusable as actionable input to APR systems. In this paper, we
propose a systematic and automated approach to mining relevant and actionable
fix patterns based on an iterative clustering strategy applied to atomic
changes within patches. The goal of FixMiner is thus to infer separate and
reusable fix patterns that can be leveraged in other patch generation systems.
Our technique, FixMiner, leverages Rich Edit Scripts, a specialized tree
structure over edit scripts that captures the AST-level context of code
changes. FixMiner uses different tree representations of Rich Edit Scripts for
each round of clustering to identify similar changes. These are abstract syntax
trees, edit actions trees, and code context trees. We have evaluated FixMiner
on thousands of software patches collected from open source projects.
Preliminary results show that we are able to mine accurate patterns,
efficiently exploiting change information in Rich Edit Scripts. We further
integrated the mined patterns into an automated program repair prototype,
PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J
benchmark. Beyond this quantitative performance, we show that the mined fix
patterns are sufficiently relevant to produce patches with a high probability
of correctness: 81% of PARFixMiner's generated plausible patches are correct.
Comment: 31 pages, 11 figures.
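FixMiner's central idea — cluster recurring atomic changes so that frequently repeated shapes become reusable patterns — can be caricatured in a few lines. The flat token-list edit representation and the identifier abstraction below are stand-ins for the paper's Rich Edit Scripts and its three tree representations:

```python
from collections import defaultdict

# Tokens kept concrete so only variable names are abstracted (toy keyword set)
KEYWORDS = {"if", "return", "null"}

def mine_patterns(edit_scripts, min_support=2):
    """Group edit scripts with an identical abstract shape; shapes that recur
    across enough patches are reported as candidate fix patterns."""
    clusters = defaultdict(list)
    for patch_id, action, tokens in edit_scripts:
        # Abstract concrete identifiers so structurally equal edits collide
        shape = (action,) + tuple(
            "ID" if tok.isidentifier() and tok not in KEYWORDS else tok
            for tok in tokens)
        clusters[shape].append(patch_id)
    return {shape: ids for shape, ids in clusters.items() if len(ids) >= min_support}

edits = [
    ("p1", "UPDATE", ["if", "(", "foo", "!=", "null", ")"]),
    ("p2", "UPDATE", ["if", "(", "bar", "!=", "null", ")"]),
    ("p3", "INSERT", ["return", "baz"]),
]
patterns = mine_patterns(edits)
# only the null-check shape recurs, so one pattern with support 2 is mined
```

The real system iterates this clustering over abstract syntax trees, edit-action trees, and code-context trees rather than token tuples, but the abstraction-then-grouping step is the same in spirit.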
Impacts and Detection of Design Smells
Changes are continuously made in the source code to take into account the needs of the customers and fix faults. Continuous change can lead to antipatterns and code smells, collectively called "design smells", occurring in the source code.
Design smells are poor solutions to recurring design or implementation problems, typically in object-oriented development. During comprehension and change activities, and due to time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow standard design and coding techniques, such as design patterns. Consequently, they introduce design smells into their systems. In the literature, several authors have claimed that design smells make object-oriented software systems more difficult to understand, more fault-prone, and harder to change than systems without such design smells. Yet, few of these authors have empirically investigated the impact of design smells on software understandability, and none of them has studied the impact of design smells on developers' effort to fix faults.
In this thesis, we propose three principal contributions. The first contribution is an empirical study bringing evidence of the impact of design smells on comprehension and change. We design and conduct two experiments with 59 subjects to assess the impact of the composition of two Blob or two Spaghetti Code occurrences on the performance of developers performing comprehension and change tasks. We measure developers' performance using: (1) the NASA task load index for their effort; (2) the time they spent performing their tasks; and (3) their percentages of correct answers. The results of the two experiments showed that two occurrences of Blob or Spaghetti Code significantly impede developers' performance during comprehension and change tasks. The obtained results justify a posteriori previous research on the specification and detection of design smells. Software development teams should warn developers against high numbers of occurrences of design smells and recommend refactorings at each step of the development process to remove them when possible.
In the second contribution, we investigate the relation between design smells and faults in classes, from the point of view of developers who must fix faults. We study the impact of the presence of design smells on the effort required to fix faults, which we measure using three metrics: (1) the duration of the fixing period; (2) the number of fields and methods impacted by fault fixes; and (3) the entropy of the fault fixes in the source code. We conduct an empirical study with 12 design smells detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the duration of the fixing period is longer for faults involving classes with design smells. Also, fixing faults in classes with design smells impacts more files, more fields, and more methods. We also observed that, after a fault is fixed, the number of occurrences of design smells in the classes involved in the fault decreases. Understanding the impact of design smells on development effort is important to help development teams better assess and forecast the impact of their design decisions, and therefore direct their effort toward improving the quality of their software systems. Development teams should monitor and remove design smells from their software systems because they are likely to increase change effort.
The third contribution concerns design smell detection. During maintenance and evolution tasks, it is important to have a tool able to detect design smells incrementally and iteratively. This incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account occurrences of design smells as they find them during comprehension and change. Researchers have proposed approaches to detect occurrences of design smells, but these approaches currently have four limitations: (1) they require extensive knowledge of design smells; (2) they have limited precision and recall; (3) they are not incremental; and (4) they cannot be applied on subsets of systems. To overcome these limitations, we introduce SMURF, a novel approach to detect design smells, based on a machine learning technique, support vector machines, and taking into account practitioners' feedback. Through an empirical study involving three systems and four design smells, we showed that the accuracy of SMURF is greater than that of DETEX and BDTEX when detecting design smell occurrences. We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we reported that SMURF's accuracy improves when using practitioners' feedback.
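One of the fault-fixing effort metrics above, the entropy of fault fixes, can be illustrated directly: Shannon entropy over the distribution of a fix's changes across files, so that a fix concentrated in a single class scores 0 while a fix scattered evenly across many files scores high. Weighting by changed-line counts is an illustrative assumption; the thesis's exact formulation may differ:

```python
import math

def change_entropy(touched_counts):
    """Shannon entropy (bits) of a fault fix's spread over files.
    touched_counts maps file -> number of changed lines."""
    total = sum(touched_counts.values())
    probs = [c / total for c in touched_counts.values() if c]
    return -sum(p * math.log2(p) for p in probs)

# A fix confined to one class vs. one spread evenly over four files
focused = change_entropy({"Blob.java": 12})
scattered = change_entropy({"A.java": 3, "B.java": 3, "C.java": 3, "D.java": 3})
# focused -> 0.0 bits, scattered -> 2.0 bits
```

Under this measure, the finding that fixes in smelly classes touch more files translates directly into higher fix entropy, i.e., more dispersed and harder-to-review changes.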
Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles
With the further development of automated driving, functional performance increases, resulting in the need for new and comprehensive testing concepts. This doctoral work aims to enable the transition from quantitative mileage to qualitative test coverage by aggregating the results of both knowledge-based and data-driven test platforms. The validity of the test domain can be extended cost-effectively throughout the software development process to achieve meaningful test termination criteria.
Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles - Technological and Methodical Approaches
Driver assistance systems and automated driving contribute substantially to improving the road safety of motor vehicles, in particular commercial vehicles. As automated driving develops further, functional performance increases, giving rise to requirements for new, holistic testing concepts. Novel verification and validation methods are required to guarantee the safety assurance of higher levels of automated driving functions.
The goal of this work is to enable the transition from a quantitative mileage count to a qualitative test coverage by aggregating test results from knowledge-based and data-driven test platforms. The adaptive test coverage thus aims at a trade-off between efficiency and effectiveness criteria for the validation of automated driving functions in the product development of commercial vehicles.
This work comprises the design and implementation of a modular framework for the customer-oriented validation of automated driving functions at a justifiable cost. Starting from conflict management for the requirements of the test strategy, highly automated test approaches are developed. Accordingly, each test approach is integrated with its respective test objectives to realize the basis of a context-driven test concept. The main contributions of this work address four focal points:
* First, a co-simulation approach is presented with which the sensor inputs on a hardware-in-the-loop test bench can be simulated and/or stimulated using synthetic driving scenarios. The presented setup offers a phenomenological modelling approach to reach a trade-off between model granularity and the computational cost of real-time simulation. This method is used for a modular integration of simulation components, such as traffic simulation and vehicle dynamics, in order to model relevant phenomena in critical driving scenarios.
* Next, a measurement and data-analysis concept for the worldwide validation of automated driving functions is presented, which allows scalability for recording vehicle-sensor and/or environment-sensor data of specific driving events on the one hand, and permanent data for statistical validation and software development on the other. Measurement data from country-specific field tests are recorded and stored centrally in a cloud database.
* Subsequently, an ontology-based approach for integrating a complementary knowledge source from field observations into a knowledge-management system is described. The grouping of recordings is realized by means of an event-based time-series analysis with hierarchical clustering and normalized cross-correlation. From each extracted cluster and its parameter space, the probability of occurrence of the corresponding logical scenario and the probability distributions of the associated parameters can be derived. Through the correlation analysis of synthetic and naturalistic driving scenarios, the requirements-based test coverage is extended adaptively and systematically by executable scenario specifications.
* Finally, a prospective risk assessment is performed as the inverted confidence level of measurable safety, using sensitivity and reliability analyses. The failure region can be identified in the parameter space in order to predict the probability of failure for each extracted logical scenario using various sampling methods, such as Monte Carlo simulation and adaptive importance sampling. The estimated probability of a safety violation for each clustered logical scenario then yields a measurable safety prediction.
The presented framework makes it possible to close the gap between knowledge-based and data-driven test platforms, consistently extending the knowledge base for covering the Operational Design Domains.
In summary, the results show the benefits and challenges of the developed framework for measurable safety by means of a confidence measure of the risk assessment. This enables a cost-efficient extension of the validity of the test domain throughout the software development process, in order to reach the required test termination criteria.
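The failure-probability step of the risk assessment can be sketched in plain Monte Carlo form. The cut-in scenario parameters, their ranges, and the time-to-collision surrogate below are invented for illustration; the thesis additionally relies on adaptive importance sampling to handle rare failures efficiently:

```python
import random

def estimate_failure_probability(simulate, n=100_000, seed=42):
    """Sample scenario parameters, run a (surrogate) simulation, and count
    safety violations to estimate a logical scenario's failure probability."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        # Hypothetical cut-in scenario parameters: gap [m], relative speed [m/s]
        gap = rng.uniform(5.0, 50.0)
        rel_speed = rng.uniform(0.0, 15.0)
        if simulate(gap, rel_speed):
            failures += 1
    return failures / n

# Toy surrogate model: failure when time-to-collision drops below 1 s
def surrogate(gap, rel_speed):
    return rel_speed > 0 and gap / rel_speed < 1.0

p_fail = estimate_failure_probability(surrogate)
```

Plain sampling like this wastes most simulations on benign parameter combinations; that inefficiency is precisely why importance sampling concentrated on the identified failure region is attractive for rare safety violations.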
Dynamic data flow testing
Data flow testing is a particular form of testing that identifies data flow relations as test objectives. It has recently attracted new interest in the context of testing object-oriented systems, since data flow information is well suited to capturing relations among object states, and can thus provide useful information for testing method interactions. Unfortunately, classic data flow testing, which is based on static analysis of the source code, fails to identify many important data flow relations due to the dynamic nature of object-oriented systems. This thesis presents Dynamic Data Flow Testing, a technique that rethinks data flow testing to suit the testing of modern object-oriented software. Dynamic Data Flow Testing stems from empirical evidence that we collect on the limits of classic data flow testing techniques. We investigate these limits by means of Dynamic Data Flow Analysis, a dynamic implementation of data flow analysis that computes sound data flow information on program traces. We compare data flow information collected with static analysis of the code against information observed dynamically on execution traces, and empirically observe that the information computed with classic analysis of the source code misses a significant amount of information corresponding to relevant behaviours that should be tested. In view of these results, we propose Dynamic Data Flow Testing. The technique promotes synergies between dynamic analysis, static reasoning, and test case generation to automatically extend a test suite with test cases that exercise the complex state-based interactions between objects. Dynamic Data Flow Testing computes precise data flow information for the program with Dynamic Data Flow Analysis, processes the dynamic information to infer new test objectives, and uses these objectives to generate new test cases.
The test cases generated by Dynamic Data Flow Testing exercise relevant behaviours that are otherwise missed by both the original test suite and test suites that satisfy classic data flow criteria.
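The dynamic side of the approach — observing which definition of an object's state actually reaches each use along an execution trace — can be sketched over a simplified trace of def/use events. The trace format and the location strings are illustrative, not the thesis's instrumentation:

```python
def collect_def_use_pairs(trace):
    """trace: sequence of ('def'|'use', object_id, field, location) events.
    Returns the def-use pairs actually exercised: for every use, the pair
    formed with the last definition of that object's field on the trace."""
    last_def = {}   # (object_id, field) -> defining location
    pairs = set()
    for event, obj, field, loc in trace:
        if event == "def":
            last_def[(obj, field)] = loc
        elif (obj, field) in last_def:   # a use reached by an observed def
            pairs.add((last_def[(obj, field)], loc))
    return pairs

trace = [
    ("def", 1, "x", "A.set:10"),
    ("use", 1, "x", "A.get:20"),
    ("def", 1, "x", "A.reset:30"),
    ("use", 1, "x", "A.get:20"),
]
pairs = collect_def_use_pairs(trace)
# two distinct pairs are observed: (set, get) and (reset, get)
```

Pairs observed this way but unreachable by static analysis (e.g., through aliasing or dynamic dispatch) are exactly the objectives a purely static criterion would miss.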
The application of classical conditioning to the machine learning of a commonsense knowledge of visual events
In the field of artificial intelligence, possession of commonsense knowledge has long been considered a requirement for constructing a machine that possesses artificial general intelligence. The conventional approach to providing this commonsense knowledge is to manually encode the required knowledge, a process that is both tedious and costly. After an analysis of classical conditioning, it was deemed that constructing a system based upon the stimulus-stimulus interpretation of classical conditioning could allow commonsense knowledge to be learned by a machine directly and passively observing its environment. Based upon these principles, a system was constructed that uses a stream of events observed within the environment to learn rules about which event is likely to follow the observation of another event. The system makes use of a feedback loop between three sub-systems: one that associates events that occur together, a second that accumulates evidence that a given association is significant, and a third that recognises the significant associations. The recognition of past associations allows both the creation of evidence for and against the existence of a particular association, and the creation of more complex associations by treating instances of strongly associated event pairs as events themselves. Testing the abilities of the system involved simulating three different learning environments. The results found that measures of significance based on classical conditioning generally outperformed a probability-based measure. This thesis contributes a theory of how a stimulus-stimulus interpretation of classical conditioning can be used to create commonsense knowledge, and an observation that a significant subset of classical conditioning phenomena likely exist to aid in the elimination of noise. This thesis also represents a significant departure from existing reinforcement learning systems, as the system presented here does not perform any form of action selection.
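The feedback loop described above — associate co-occurring events, accumulate evidence, promote significant associations — can be caricatured as a counting learner over an event stream. The evidence threshold and the toy events are our own assumptions, and real classical-conditioning significance measures are richer than raw counts:

```python
from collections import Counter

def learn_associations(events, min_evidence=3):
    """Count how often event b directly follows event a; pairs observed often
    enough are promoted to 'a predicts b' rules (evidence accumulation)."""
    evidence = Counter(zip(events, events[1:]))
    return {pair for pair, count in evidence.items() if count >= min_evidence}

# Thunder reliably follows lightning; what follows thunder varies
stream = ["lightning", "thunder", "wind",
          "lightning", "thunder", "rain",
          "lightning", "thunder", "birds",
          "lightning", "thunder"]
rules = learn_associations(stream)
# only (lightning, thunder) accumulates enough evidence to become a rule
```

The thesis's system additionally lets strongly associated pairs act as events in their own right, so a counting scheme like this would be applied recursively to build compound associations.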