
    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing ≈89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages.
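At a sketch level, the step of localizing a user action in each video frame can be illustrated with simple frame differencing. This is a hedged stand-in for V2S's actual pipeline, which uses trained object-detection and image-classification models rather than differencing; the threshold value and synthetic frames below are invented for illustration:

```python
import numpy as np

def detect_touch_centroid(prev_frame, frame, diff_threshold=30):
    """Locate the centroid of pixels that changed between two grayscale
    frames -- a crude stand-in for the stage that localizes the touch
    indicator in each frame before action classification."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    mask = diff > diff_threshold
    if not mask.any():
        return None  # no candidate touch in this frame pair
    ys, xs = np.nonzero(mask)
    return (float(xs.mean()), float(ys.mean()))

# Two synthetic 8x8 frames differing in a 2x2 patch (a simulated tap).
a = np.zeros((8, 8), dtype=np.uint8)
b = a.copy()
b[2:4, 5:7] = 255
print(detect_touch_centroid(a, b))  # → (5.5, 2.5)
```

A sequence of such per-frame localizations, grouped over time, would then be classified into taps, long-presses, and swipes before being emitted as a replayable script.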

    Microstimulation and multicellular analysis: A neural interfacing system for spatiotemporal stimulation

    Willfully controlling the focus of an extracellular stimulus remains a significant challenge in the development of neural prosthetics and therapeutic devices. In part, this challenge is due to the vast set of complex interactions between the electric fields induced by the microelectrodes and the complex morphologies and dynamics of the neural tissue. Overcoming such issues to produce methodologies for targeted neural stimulation requires a system that is capable of (1) delivering precise, localized stimuli as a function of the stimulating electrodes and (2) recording the locations and magnitudes of the resulting evoked responses as a function of the cell geometry and membrane dynamics. In order to improve stimulus delivery, we developed microfabrication technologies that could specify the electrode geometry and electrical properties. Specifically, we developed a closed-loop electroplating strategy to monitor and control the morphology of surface coatings during deposition, and we implemented pulse-plating techniques as a means to produce robust, resilient microelectrodes that could withstand rigorous handling and harsh environments. In order to evaluate the responses evoked by these stimulating electrodes, we developed microscopy techniques and signal processing algorithms that could automatically identify and evaluate the electrical response of each individual neuron. Finally, by applying this simultaneous stimulation and optical recording system to the study of dissociated cortical cultures in multielectrode arrays, we could evaluate the efficacy of excitatory and inhibitory waveforms. Although we found that the proximity of the electrode is a poor predictor of individual neural excitation thresholds, we have shown that it is possible to use inhibitory waveforms to globally reduce excitability in the vicinity of the electrode.
Thus, the developed system was able to provide very high-resolution insight into the complex set of interactions between the stimulating electrodes and populations of individual neurons. Ph.D. thesis. Committee Chair: Stephen P. DeWeerth; Committee Members: Bruce Wheeler, Michelle LaPlaca, Robert Lee, Steve Potter.
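The thesis's algorithms for evaluating each neuron's evoked response are not reproduced here, but the core decision, flagging whether a cell responded to a stimulus, can be sketched as a baseline-plus-k-sigma test on the recorded trace. The threshold factor `k`, window lengths, and synthetic trace below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def evoked_response(trace, stim_idx, baseline_len=50, k=3.0, window=20):
    """Decide whether one cell's recorded trace shows a stimulus-evoked
    response: the post-stimulus peak must exceed the pre-stimulus
    baseline mean by k standard deviations."""
    base = trace[stim_idx - baseline_len:stim_idx]
    threshold = base.mean() + k * base.std()
    peak = trace[stim_idx:stim_idx + window].max()
    return bool(peak > threshold), float(peak)

# Synthetic trace: baseline noise plus a response shortly after the
# stimulus delivered at sample 100.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 200)
trace[110:115] += 2.0
fired, peak = evoked_response(trace, stim_idx=100)
print(fired)  # → True
```

Applied per cell across a field of view, such a test yields the map of response locations and magnitudes that the stimulation-efficacy analysis requires.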

    Organic transformation of ERP documentation practices: Moving from archival records to dialogue-based, agile throwaway documents

    Implementing enterprise resource planning (ERP) systems remains challenging and requires organizational changes. Given the scale and complexity of ERP projects, documentation plays a crucial role in coordinating operational details. However, the emergence of the agile approach raises the question of how adequate lightweight documentation is in agile ERP implementation. Unfortunately, both academia and industry often overlook the natural evolution of documentation practices. This study examines current documentation practices through interviews with 23 field experts to address this oversight. The findings indicate a shift in documentation practices from retrospective approaches to dialogue-based, agile throwaway documents, including audiovisual recordings and informal emails. Project managers who extensively engage with throwaway documents demonstrate higher situational awareness and greater effectiveness in managing ERP projects than those who do not. The findings show an organic transformation of ERP documentation practices. We redefine documentation to include unstructured, relevant information across different media, emphasizing searchability. Additionally, the study offers two vignettes for diverse organizational contexts to illustrate the best practices of agile ERP projects.

    A dynamic clamping approach using in silico IK1 current for discrimination of chamber-specific hiPSC-derived cardiomyocytes

    Human induced pluripotent stem cell (hiPSC)-derived cardiomyocytes (CM) constitute a mixed population of ventricular-, atrial- and nodal-like cells, limiting the reliability of the model for studying chamber-specific disease mechanisms. Previous studies characterised CM phenotype based on action potential (AP) morphology, but the classification criteria were still undefined. Our aim was to use in silico models to develop an automated approach for discriminating the electrophysiological differences between hiPSC-CM. We propose the dynamic clamp (DC) technique with the injection of a specific IK1 current as a tool for deriving nine electrical biomarkers and blindly classifying differentiated CM. An unsupervised learning algorithm was applied to discriminate CM phenotypes, and principal component analysis was used to visualise cell clustering. Pharmacological validation was performed with specific ion channel blockers and receptor agonists. The proposed approach improves the translational relevance of the hiPSC-CM model for studying mechanisms underlying inherited or acquired atrial arrhythmias in human CM, and for screening anti-arrhythmic agents.
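Each dynamic-clamp iteration computes the synthetic IK1 from the measured membrane potential and feeds it back as a current command. The sketch below uses a generic Boltzmann-rectified inward-rectifier formulation with made-up parameter values (`g_k1`, `e_k`, `v_half`, `slope`), not the specific in silico IK1 model used in the study:

```python
import math

def ik1_current(vm_mV, g_k1=0.5, e_k=-85.0, v_half=-60.0, slope=10.0):
    """Generic inward-rectifier current: maximal conductance g_k1 scaled
    by a Boltzmann rectification factor, times the driving force.
    All parameter values here are illustrative, not from the study."""
    rectification = 1.0 / (1.0 + math.exp((vm_mV - v_half) / slope))
    return g_k1 * rectification * (vm_mV - e_k)

# One dynamic-clamp iteration: read the measured membrane potential and
# compute the model IK1 to pass to the amplifier's current command
# (the sign convention depends on the amplifier setup).
vm = -70.0
i_k1 = ik1_current(vm)
print(round(i_k1, 3))  # → 5.483
```

Because the injected IK1 stabilises the resting potential, AP biomarkers (amplitude, duration, upstroke velocity, and so on) measured under this clamp become comparable across cells, which is what makes the downstream unsupervised clustering meaningful.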

    Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development

    Mobile devices and platforms have become an established target for modern software developers due to performant hardware and a large and growing user base numbering in the billions. Despite their popularity, the software development process for mobile apps comes with a set of unique, domain-specific challenges rooted in program comprehension. Many of these challenges stem from developer difficulties in reasoning about different representations of a program, a phenomenon we define as a "language dichotomy". In this paper, we reflect upon the various language dichotomies that contribute to open problems in program comprehension and development for mobile apps. Furthermore, to help guide the research community towards effective solutions for these problems, we provide a roadmap of directions for future work. Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18).

    Extracellular electrophysiology with close-packed recording sites: spike sorting and characterization

    Advances in recording technologies now allow us to record populations of neurons simultaneously, data necessary to understand the network dynamics of the brain. Extracellular probes are fabricated with ever greater numbers of recording sites to capture the activity of increasing numbers of neurons. However, the utility of this extracellular data is limited by an initial analysis step, spike sorting, that extracts the activity patterns of individual neurons from the extracellular traces. Commonly used spike sorting methods require manual processing that limits their scalability, and errors can bias downstream analyses. Leveraging the replication of the activity from a single neuron on nearby recording sites, we designed a spike sorting method consisting of three primary steps: (1) a blind source separation algorithm to estimate the underlying source components, (2) a spike detection algorithm to find the set of spikes from each component best separated from background activity and (3) a classifier to evaluate if a set of spikes came from one individual neuron. To assess the accuracy of our method, we simulated multi-electrode array data that encompass many of the realistic variations and the sources of noise in in vivo neural data. Our method was able to extract individual simulated neurons in an automated fashion without any errors in spike assignment. Further, the number of neurons extracted increased as we increased recording site count and density. To evaluate our method in vivo, we performed both extracellular recording with our close-packed probes and a co-localized patch clamp recording, directly measuring one neuron’s ground truth set of spikes. Using this in vivo data we found that when our spike sorting method extracted the patched neuron, the spike assignment error rates were at the low end of reported error rates, and that our errors were frequently the result of failed spike detection during bursts where spike amplitude decreased into the noise. 
We used our in vivo data to characterize the extracellular recordings of burst activity and, more generally, what an extracellular electrode records. With this knowledge, we updated our spike detector to capture more burst spikes and improved our classifier based on our characterizations.
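Step (2) of the pipeline, finding the spikes in a source component that are well separated from background activity, can be sketched as a robust threshold crossing. The MAD-based noise estimate is a standard choice in spike sorting; the threshold factor `k`, refractory gap, and synthetic trace are illustrative assumptions rather than the thesis's actual detector:

```python
import numpy as np

def detect_spikes(component, k=5.0, refractory=30):
    """Threshold one source component at k times a robust noise
    estimate (median absolute deviation scaled to the standard
    deviation of Gaussian noise), enforcing a refractory gap
    between consecutive detections."""
    noise = np.median(np.abs(component)) / 0.6745
    crossings = np.nonzero(component > k * noise)[0]
    spikes = []
    for idx in crossings:
        if not spikes or idx - spikes[-1] > refractory:
            spikes.append(int(idx))
    return spikes

# Gaussian background with three large injected spikes.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)
for t in (300, 900, 1500):
    x[t] += 20.0
print(detect_spikes(x))
```

In the full method this detector runs on each blind-source-separation component, and a classifier then judges whether the detected spike set plausibly came from a single neuron.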

    SigMate: a MATLAB-based automated tool for extracellular neuronal signal processing and analysis

    Rapid advances in neuronal probe technology for multisite recording of brain activity have posed a significant challenge to neuroscientists for processing and analyzing the recorded signals. To be able to infer meaningful conclusions quickly and accurately from large datasets, automated and sophisticated signal processing and analysis tools are required. This paper presents a novel MATLAB-based tool, “SigMate”, incorporating standard methods to analyze spikes and EEG signals, and in-house solutions for local field potential (LFP) analysis. Available modules at present are: 1. in-house developed algorithms for data display (2D and 3D), file operations (file splitting, file concatenation, and file column rearranging), baseline correction, slow stimulus artifact removal, noise characterization and signal quality assessment, current source density (CSD) analysis, latency estimation from LFPs and CSDs, determination of cortical layer activation order using LFPs and CSDs, and single LFP clustering; 2. existing modules for spike detection, sorting and spike train analysis, and EEG signal analysis. SigMate has the flexibility of analyzing multichannel signals as well as signals from multiple recording sources. The in-house developed tools for LFP analysis have been extensively tested with signals recorded using standard extracellular recording electrodes, and planar and implantable multi-transistor array (MTA) based neural probes. SigMate will be disseminated shortly to the neuroscience community under the open-source GNU General Public License.
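The core computation behind a CSD module, in the standard method, is the negative second spatial derivative of the LFP along the probe's depth axis. A minimal sketch follows; the conductivity value, contact spacing, and sign convention are chosen for illustration and are not taken from SigMate:

```python
import numpy as np

def csd_standard(lfp, spacing_um=100.0, sigma=0.3):
    """Standard-method CSD estimate: negative second spatial difference
    of the LFP along the depth axis, scaled by tissue conductivity
    sigma (S/m).  lfp has shape (channels, samples), ordered by depth."""
    h = spacing_um * 1e-6  # contact spacing in metres
    d2 = lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]
    return -sigma * d2 / h ** 2  # one CSD trace per interior channel

# A negative LFP deflection confined to the middle contact shows up as
# a current sink (negative CSD) at that depth.
lfp = np.zeros((5, 4))
lfp[2] = -1.0
csd = csd_standard(lfp)
print(csd.shape)  # → (3, 4)
```

The second difference drops the two edge channels, which is why real CSD tools pad or extrapolate the boundary contacts before analysis.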

    Novel Use of Matched Filtering for Synaptic Event Detection and Extraction

    Efficient and dependable methods for detection and measurement of synaptic events are important for studies of synaptic physiology and neuronal circuit connectivity. Because published detection algorithms based upon amplitude thresholding and fixed or scaled template comparisons are of limited utility for detecting signals with variable amplitudes and superimposed events that have complex waveforms, previous techniques are not applicable for detection of evoked synaptic events in photostimulation and other similar experimental situations. Here we report on a novel technique that combines the design of a bank of approximate matched filters with detection and estimation theory to automatically detect and extract photostimulation-evoked excitatory postsynaptic currents (EPSCs) from individually recorded neurons in cortical circuit mapping experiments. The sensitivity and specificity of the method were evaluated on both simulated and experimental data, with performance comparable to that of visual event detection performed by human operators. This new technique was applied to quantify and compare the EPSCs obtained from excitatory pyramidal cells and fast-spiking interneurons. In addition, our technique has been further applied to the detection and analysis of inhibitory postsynaptic current (IPSC) responses. Given the general purpose of our matched filtering and signal recognition algorithms, we expect that our technique can be appropriately modified and applied to detect and extract other types of electrophysiological and optical imaging signals.
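The matched-filter idea can be sketched as correlating the recording with a unit-energy event template and keeping supra-threshold local maxima of the filter output. The template shape, threshold, and peak-picking rule below are illustrative simplifications of the bank-of-filters design described above, with all numeric values invented:

```python
import numpy as np

def matched_filter_detect(signal, template, threshold):
    """Correlate the recording with a unit-energy event template and
    return sample indices where the filter output is a supra-threshold
    local maximum."""
    t = template / np.linalg.norm(template)
    out = np.correlate(signal, t, mode="valid")
    hits = np.nonzero(out > threshold)[0]
    return [int(i) for i in hits
            if out[i] == out[max(0, i - 5):i + 6].max()]

# Synthetic EPSC-like template (fast rise, slow exponential decay)
# embedded at two known onsets in an otherwise silent trace.
n = np.arange(40)
template = (1 - np.exp(-n / 2.0)) * np.exp(-n / 10.0)
signal = np.zeros(500)
for onset in (100, 300):
    signal[onset:onset + 40] += template
print(matched_filter_detect(signal, template, threshold=1.0))
```

A bank of such filters, built from templates spanning the expected range of rise and decay kinetics, lets events with variable shapes be detected by whichever filter responds most strongly.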

    Neonatal seizure detection based on single-channel EEG: instrumentation and algorithms

    Seizure activity in the perinatal period, which constitutes the most common neurological emergency in the neonate, can cause brain disorders later in life, or even death, depending on its severity. The issue remains unsolved to date despite numerous attempts to tackle it. Therefore, a method is still needed that can enable neonatal cerebral activity monitoring to identify those at risk. Currently, electroencephalography (EEG) and amplitude-integrated EEG (aEEG) are exploited for the identification of seizures in neonates; however, both lack automation. EEG and aEEG are mainly analysed visually, requiring a specific skill set and, as a result, the presence of an expert on a 24/7 basis, which is not feasible. Additionally, EEG devices employed in neonatal intensive care units (NICU) are mainly designed around adults, so their design specifications are not neonate-specific. Their size, driven by the multi-channel requirement for adults (the adult minimum is ≥ 32 channels, while the neonatal gold standard is 10), makes them bulky, and they occupy significant space in the NICU. This thesis addresses the challenge of reliably, efficiently and effectively detecting seizures in the neonatal brain in a fully automated manner. Two novel instruments and two novel neonatal seizure detection algorithms (SDAs) are presented. The first instrument, named PANACEA, is a high-performance, wireless, wearable and portable multi-instrument able to record neonatal EEG as well as a plethora of other (bio)signals. Despite its high-performance characteristics and ability to record EEG, this device is mostly intended for the concurrent monitoring of other vital biosignals, such as the electrocardiogram (ECG) and respiration, which provide vital information about a neonate's medical condition.
The two aforementioned biosignals constitute two of the most important artefacts in the EEG, and their concurrent acquisition benefits the SDA by providing information to an artefact-removal algorithm. The second instrument, called the neoEEG Board, is an ultra-low-noise, wireless, portable and high-precision neonatal EEG recording instrument. It is able to detect and record minute signals (< 10 nVp), enabling cerebral activity monitoring even from lower layers of the cortex. The neoEEG Board accommodates 8 inputs, each equipped with a patent-pending tunable filter topology that allows passband formation based on the application. Both the PANACEA and the neoEEG Board are able to host low- to middle-complexity SDAs, and they can operate continuously for at least 8 hours on three AA batteries. Along with PANACEA and the neoEEG Board, two novel neonatal SDAs have been developed. The first one, termed G prime-smoothed (G′_s), is an on-line, automated, patient-specific, single-feature, single-channel EEG-based SDA. The G′_s SDA is enabled by the invention of a novel feature, termed G prime (G′), which can be characterised as an energy operator. The trace that G′_s creates can also serve as a visualisation tool because of its distinct change in the presence of a seizure. Finally, the second SDA is machine learning (ML)-based, combining numerous features with a support vector machine (SVM) classifier. It can be characterised as automated, on-line and patient-independent, and, similarly to G′_s, it makes use of single-channel EEG. The proposed neonatal SDA introduces the use of the Hilbert-Huang transform (HHT) to the field of neonatal seizure detection. The HHT analyses the non-linear and non-stationary EEG signal, providing information about the signal as it evolves. Through the use of the HHT, novel features, such as the per-intrinsic-mode-function (IMF) 0-3 Hz sub-band power, were also employed.
Detection rates of this novel neonatal SDA are comparable to those of multi-channel SDAs.
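The G′ feature itself is the thesis's invention and is not reproduced here. As a generic stand-in, a smoothed nonlinear energy operator (the Teager-Kaiser operator) illustrates how an energy-type single-channel feature jumps at seizure-like activity; the sampling rate, amplitudes, and smoothing window below are illustrative assumptions:

```python
import numpy as np

def teager_energy(x):
    """Teager-Kaiser nonlinear energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def smoothed_energy(x, win=5):
    """Moving-average smoothing of the energy trace, mirroring the
    'smoothed' step of an energy-based detector at sketch level."""
    kernel = np.ones(win) / win
    return np.convolve(teager_energy(x), kernel, mode="same")

# Synthetic single-channel EEG: low-amplitude background, then a
# higher-amplitude 4 Hz oscillation standing in for seizure activity.
fs = 256.0
t = np.arange(1000) / fs
eeg = 0.1 * np.sin(2 * np.pi * 1.0 * t)
eeg[500:] += 2.0 * np.sin(2 * np.pi * 4.0 * t[500:])
feature = smoothed_energy(eeg)
print(feature[:400].mean() < feature[600:].mean())  # → True
```

A detector would then threshold this feature trace, which is also why such a trace doubles as a visualisation aid: the step change at seizure onset is visible at a glance.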