
    A concern visualization approach for improving MATLAB and Octave program comprehension

    The literature has pointed out the need to focus efforts on better supporting the comprehension of MATLAB and Octave programs. Despite being widely used in industry and academia in the engineering domain, programs and routines written in these languages still lack dedicated approaches and tools for their understanding. Given the use of crosscutting concerns (CCCs) to support the comprehension of object-oriented programs, there is room for their use in the context of MATLAB and Octave programs, and the literature offers proposals and examples in this direction. Considering this scenario, we propose the use of visualization enriched with CCC representation to support the comprehension of such programs. This paper discusses the use of a multiple-view interactive environment called OctMiner in the context of two case studies to characterize how collected information related to crosscutting concerns can foster the comprehension of MATLAB and GNU/Octave programs. Based on these case studies, we propose strategies, built on OctMiner and tailored to support different comprehension activities, for programs written in MATLAB and Octave.

    Visualization for Finite Element Method Education

    In this project, common practices for visualizing scientific data were studied. In addition, the science of cognition and data display was reviewed. The results of this investigation were applied to augment an introductory Civil Engineering course on the Finite Element Method at WPI. Software enhancements allowed three-dimensional visualization of simulated engineering structures. The research on cognition and data graphics was used to improve understanding of these visual aids. The plotting function, developed in MATLAB and Julia environments during the course of this project, can help all students visualize the results of their numerical codes.
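
    The plotting function itself is not reproduced here; the following is a minimal Python sketch (the project used MATLAB and Julia) of the kind of visualization such a function provides, solving a one-dimensional bar with linear finite elements and plotting the nodal displacements against the exact solution. The material values, element count, and loading are illustrative assumptions.

```python
# Minimal 1D bar FEM and result plot (illustrative values, not from the project).
import numpy as np
import matplotlib.pyplot as plt

E, A, L, F = 210e9, 1e-4, 2.0, 1e4    # Young's modulus, area, length, tip load (assumed)
n_el = 8                              # number of linear elements
n_nodes = n_el + 1
x = np.linspace(0.0, L, n_nodes)
h = L / n_el

# Assemble the global stiffness matrix from identical 2x2 element matrices.
K = np.zeros((n_nodes, n_nodes))
ke = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_el):
    K[e:e + 2, e:e + 2] += ke

f = np.zeros(n_nodes)
f[-1] = F                             # point load at the free end

# Apply the fixed boundary condition at node 0 and solve the reduced system.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Plot the FEM solution against the exact linear solution u(x) = F*x/(E*A).
plt.plot(x, u, "o-", label="FEM (linear elements)")
plt.plot(x, F * x / (E * A), "--", label="exact")
plt.xlabel("x [m]"); plt.ylabel("axial displacement [m]")
plt.legend(); plt.title("1D bar under tip load")
plt.show()
```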

    Machine Learning And Natural Language Methods For Detecting Psychopathy In Textual Data

    Among the myriad mental conditions permeating society, psychopathy is perhaps the most elusive to diagnose and treat. With the advent of natural language processing and machine learning, however, we have ushered in a new age of technology that provides a fresh toolkit for analyzing text and context. Because text remains the medium of choice for most personal and professional interactions, it may be possible to use textual samples from psychopaths as a means for understanding and ultimately classifying similar individuals based on the content of their language usage. This paper aims to investigate natural language processing and supervised machine learning methods for detecting and classifying psychopaths based on text. First, I investigate psychopathic texts using natural language processing to tease out major trends that appear in the classical psychological literature. I look at ways to meaningfully visualize important features within the corpus and examine procedures for statistically comparing the use of function words by psychopaths versus non-psychopaths. Second, I use a “bag of words” approach to investigate the effectiveness of unary-classification and binary-classification methods for determining whether text shows psychopathic indicators. Lastly, I apply standard optimization techniques to tune hyperparameters to yield the best results, while also using a random forest approach to identify and select the most meaningful features. Ultimately, the aim of this research is to validate or disqualify traditional vector-space models on a corpus whose authors consistently try to hide in plain sight.
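
    As a rough illustration of the “bag of words” classification pipeline described above, the sketch below uses scikit-learn to vectorize a toy corpus, tune a random forest with a small hyperparameter grid, and rank vocabulary features by importance. The texts, labels, and grid are placeholders, not the thesis data or its exact configuration.

```python
# Bag-of-words binary classification with random-forest feature importances
# (toy data; the real corpus and labels come from the thesis and are not shown here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

texts = ["i always get what i want", "we enjoyed the quiet evening together",
         "people exist to serve my goals", "she thanked everyone for their help"]
labels = [1, 0, 1, 0]   # 1 = psychopathic indicators, 0 = control (placeholder labels)

pipeline = Pipeline([
    ("bow", CountVectorizer(lowercase=True, stop_words=None)),  # keep function words
    ("clf", RandomForestClassifier(random_state=0)),
])

# Standard hyperparameter tuning over a small illustrative grid.
grid = GridSearchCV(pipeline, {"clf__n_estimators": [50, 100],
                               "clf__max_depth": [None, 5]}, cv=2)
grid.fit(texts, labels)

# Rank vocabulary terms by random-forest importance to select meaningful features.
vocab = grid.best_estimator_.named_steps["bow"].get_feature_names_out()
importances = grid.best_estimator_.named_steps["clf"].feature_importances_
top = sorted(zip(importances, vocab), reverse=True)[:5]
print("top features:", top)
print("prediction:", grid.predict(["everything revolves around me"]))
```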

    A Perceptual Comparison of “Black Box” Modeling Algorithms for Nonlinear Audio Systems

    Nonlinear system identification is a widespread topic of interest, particularly within the audio industry, as these techniques are employed to synthesize black box models of nonlinear audio effects. Given the myriad approaches to black box modeling, questions arise as to whether an “optimal” approach exists, that is, one that achieves subjectively valid results as a model with minimal computational expense. This thesis uses ABX listening tests to compare black box models of three hardware audio effects using two popular nonlinear implementations, along with two proposed modified implementations. Models were constructed in the Hammerstein form using sine sweeps and a novel measurement technique for the filters and nonlinearities, respectively. Testing revolved around null hypotheses assuming no change in model identification regardless of the device modeled, implementation used, or program material of the model stimulus. Results provide clear evidence of an effect on all of these counts, and support a full rejection of the null hypotheses. Outcomes identify a preferable implementation among the algorithms tested, and suggest the removal of certain implementations as valid approaches altogether.
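
    For readers unfamiliar with the Hammerstein form, it cascades a memoryless nonlinearity into a linear filter. The sketch below shows that structure in Python with an assumed tanh nonlinearity and a short FIR filter standing in for the characteristics the thesis measures with sine sweeps; it illustrates the model topology only, not the measurement technique or the implementations compared.

```python
# Hammerstein-form black-box model: static nonlinearity followed by a linear filter.
# The tanh shape, drive level, and FIR taps are illustrative assumptions, not the
# measured characteristics of the hardware units tested in the thesis.
import numpy as np
from scipy.signal import lfilter

fs = 48000
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 110 * t)          # test signal (110 Hz tone)

def static_nonlinearity(x, drive=4.0):
    """Memoryless saturating nonlinearity (stand-in for the measured curve)."""
    return np.tanh(drive * x) / np.tanh(drive)

# Short low-pass FIR stands in for the impulse response identified via sine sweeps.
h = np.array([0.25, 0.5, 0.25])

y = lfilter(h, [1.0], static_nonlinearity(x))  # Hammerstein output: filter(NL(x))
print("peak output level:", float(np.max(np.abs(y))))
```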

    A computational framework for sound segregation in music signals

    Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    KOLAM: human computer interfaces for visual analytics in big data imagery

    In the present day, we are faced with a deluge of disparate and dynamic information from multiple heterogeneous sources. Among these are the big data imagery datasets that are rapidly being generated via mature acquisition methods in the geospatial, surveillance (specifically, Wide Area Motion Imagery or WAMI) and biomedical domains. The need to interactively visualize these imagery datasets by using multiple types of views (as needed) into the data is common to these domains. Furthermore, researchers in each domain have additional needs: users of WAMI datasets also need to interactively track objects of interest using algorithms of their choice, visualize the resulting object trajectories and interactively edit these results as needed. While software tools that fulfill each of these requirements individually are available and well-used at present, there is still a need for tools that can combine the desired aspects of visualization, human computer interaction (HCI), data analysis, data management, and (geo-)spatial and temporal data processing into a single flexible and extensible system. KOLAM is an open, cross-platform, interoperable, scalable and extensible framework for visualization and analysis that we have developed to fulfill the above needs. The novel contributions in this thesis are the following: 1) Spatio-temporal caching for animating both giga-pixel and Full Motion Video (FMV) imagery, 2) Human computer interfaces purposefully designed to accommodate big data visualization, 3) Human-in-the-loop interactive video object tracking - ground-truthing of moving objects in wide area imagery using algorithm-assisted human-in-the-loop coupled tracking, 4) Coordinated visualization using stacked layers, side-by-side layers/video sub-windows and embedded imagery, 5) Efficient one-click manual tracking, editing and data management of trajectories, 6) Efficient labeling of image segmentation regions and passing these results to desired modules, 7) Visualization of image processing results generated by non-interactive operators using layers, 8) Extension of interactive imagery and trajectory visualization to multi-monitor wall display environments, 9) Geospatial applications: Providing rapid roam, zoom and hyper-jump spatial operations, interactive blending, colormap and histogram enhancement, spherical projection and terrain maps, 10) Biomedical applications: Visualization and target tracking of cell motility in time-lapse cell imagery, collecting ground-truth from experts on whole-slide imagery (WSI) for developing histopathology analytic algorithms and computer-aided diagnosis for cancer grading, and easy-to-use tissue annotation features.
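
    As an illustration of the spatio-temporal caching idea named in the first contribution above, the sketch below implements a least-recently-used tile cache keyed by (frame, zoom level, row, column). The loader, tile size, and capacity are placeholder assumptions and do not reflect KOLAM's actual implementation.

```python
# LRU tile cache keyed by (frame, zoom level, row, col): a simplified stand-in
# for the spatio-temporal caching described above, not KOLAM's actual code.
from collections import OrderedDict
import numpy as np

class TileCache:
    def __init__(self, capacity=256):
        self.capacity = capacity
        self._tiles = OrderedDict()          # key -> tile array, in LRU order

    def get(self, frame, level, row, col, loader):
        key = (frame, level, row, col)
        if key in self._tiles:
            self._tiles.move_to_end(key)     # mark as most recently used
            return self._tiles[key]
        tile = loader(*key)                  # cache miss: fetch from disk/decoder
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict the least recently used tile
        return tile

def fake_loader(frame, level, row, col, size=256):
    """Placeholder tile loader; a real viewer would decode imagery here."""
    return np.zeros((size, size), dtype=np.uint8)

cache = TileCache(capacity=4)
for f in range(3):                           # prefetch tiles along the time axis
    cache.get(f, 0, 10, 12, fake_loader)
print("cached tiles:", len(cache._tiles))
```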

    Teaching and Learning of Fluid Mechanics

    This book contains research on the pedagogical aspects of fluid mechanics and includes case studies, lesson plans, articles on historical aspects of fluid mechanics, and novel and interesting experiments and theoretical calculations that convey complex ideas in creative ways. The current volume showcases the teaching practices of fluid dynamicists from different disciplines, ranging from mathematics, physics, mechanical engineering, and environmental engineering to chemical engineering. The suitability of these articles ranges from early undergraduate to graduate level courses and can be read by faculty and students alike. We hope this collection will encourage cross-disciplinary pedagogical practices and give students a glimpse of the wide range of applications of fluid dynamics

    Cultural Context-Aware Models and IT Applications for the Exploitation of Musical Heritage

    Information engineering has always expanded its scope by inspiring innovation in different scientific disciplines. In particular, in the last sixty years, music and engineering have forged a strong connection in the discipline known as “Sound and Music Computing”. Musical heritage is a paradigmatic case that includes several multi-faceted cultural artefacts and traditions. Several issues arise from the analog-digital transfer of cultural objects, concerning their creation, preservation, access, analysis and experience. The keystone is the relationship of these digitized cultural objects with their carrier and cultural context. The terms “cultural context” and “cultural context awareness” are delineated, alongside the concepts of contextual information and metadata. Since they maintain the integrity of the object, its meaning and cultural context, their role is critical. This thesis explores three main case studies concerning historical audio recordings and ancient musical instruments, aiming to delineate models to preserve, analyze, access and experience the digital versions of these three prominent examples of musical heritage. The first case study concerns analog magnetic tapes and, in particular, tape music, an experimental music genre born in the second half of the 20th century. This case study has relevant implications from the musicological, philological and archival points of view, since the carrier has a paramount role and the tight connection with its content can easily break during the digitization process or the access phase. With the aim of helping musicologists and audio technicians in their work, several tools based on Artificial Intelligence are evaluated on tasks such as discontinuity detection and equalization recognition. By considering the peculiarities of tape music, the philological problem of stemmatics in digitized audio documents is tackled: an algorithm based on phylogenetic techniques is proposed and assessed, confirming the suitability of these techniques for this task. Then, a methodology for historically faithful access to digitized tape music recordings is introduced, considering contextual information and its relationship with the carrier and the replay device. Based on this methodology, an Android app which virtualizes a tape recorder is presented, together with its assessment. Furthermore, two web applications are proposed to faithfully experience digitized 78 rpm discs and magnetic tape recordings, respectively. Finally, a prototype web application for musicological analysis is presented, which aims to concentrate a relevant part of the knowledge acquired in this work into a single interface. The second case study is a corpus of Arab-Andalusian music, suitable for computational research, which opens new opportunities for musicological studies by applying data-driven analysis. The description of the corpus is based on the five criteria formalized in the CompMusic project of the Universitat Pompeu Fabra in Barcelona: purpose, coverage, completeness, quality and re-usability. Four Jupyter notebooks were developed with the aim of providing computational musicologists with a useful tool for analyzing and using the data and metadata of this corpus. The third case study concerns an exceptional historical musical instrument: an ancient Pan flute exhibited at the Museum of Archaeological Sciences and Art of the University of Padova. The final objective was the creation of a multimedia installation to valorize this precious artifact and to allow visitors to interact with the archaeological find and learn its history. The case study provided the opportunity to study a methodology suitable for the valorization of this ancient musical instrument, but also extensible to other artifacts or museum collections. Both the methodology and the resulting multimedia installation are presented, followed by the assessment carried out by a multidisciplinary group of experts.
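
    The phylogenetic algorithm itself is not detailed in this abstract; as a loose stand-in, the sketch below derives a tree over digitized copies of a recording by hierarchically clustering pairwise distances between audio feature vectors. The feature vectors are random placeholders, and hierarchical clustering is only an approximation of true phylogenetic reconstruction.

```python
# Toy "stemmatic" tree over digitized copies of a recording: pairwise distances
# between feature vectors, then hierarchical clustering as a rough stand-in for
# the phylogenetic techniques referenced above (the features here are random).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, to_tree

rng = np.random.default_rng(0)
copies = ["master", "copy_A", "copy_B", "copy_of_A"]

base = rng.normal(size=20)                       # stand-in spectral feature vector
features = np.vstack([
    base,
    base + rng.normal(scale=0.1, size=20),       # first-generation copy
    base + rng.normal(scale=0.3, size=20),       # noisier transfer
    base + rng.normal(scale=0.1, size=20) + rng.normal(scale=0.1, size=20),
])

distances = pdist(features, metric="euclidean")  # condensed pairwise distance matrix
tree = linkage(distances, method="average")      # UPGMA-style hierarchical tree
print(tree)                                      # each row merges two clusters

root = to_tree(tree)
print("tree depth (max merge distance):", root.dist)
```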

    Joint Estimation of Perceptual, Cognitive, and Neural Processes

    Humans are remarkable in their ability to perform highly complicated behaviors with ease and little conscious thought. Successful speech comprehension, for example, requires the collaboration of multiple sensory, perceptual, and cognitive processes to focus attention on the speaker, disregard competing cues, correctly process incoming audio stimuli, and attach meaning and context to what is heard. Investigating these phenomena can help unravel crucial aspects of human behavior as well as how the brain works in health and disease. However, traditional methods typically involve isolating individual variables and evaluating their decontextualized contribution to an outcome variable of interest. While rigorous and more straightforward to interpret, these reductionist methods forfeit multidimensional inference and waste data resources by collecting identical data in every participant without considering what is the most relevant for any given participant. Methods that can optimize the exact data collected for each participant would be useful for constructing more complex models and for optimizing expensive data collection. Modern tools, such as mobile hardware and large databases, have been implemented to improve upon traditional methods but are still limited in the amount of inference they can provide about an individual. To circumvent these obstacles, a novel machine learning framework capable of quantifying behavioral functions of multiple variables with practical amounts of data has been developed and validated. This framework is capable of linking even loosely related input domains and measuring shared information in one comprehensive assessment. The work described in this thesis first evaluates this framework for active machine learning audiogram (AMLAG) applications. AMLAG customizes the generalized framework to efficiently, accurately, and reliably estimate audiogram functions. Audiograms provide a measure of hearing ability for each ear in the inherently two-dimensional domain of frequency and intensity. Where clinical methods rely on reducing audiogram acquisition to a one-dimensional assessment, AMLAG has been previously verified to provide a continuous, two-dimensional estimate of hearing ability in one ear. Modeling two ears that are physiologically distinct but are defined in the same frequency-intensity input domain, AMLAG was extended to bilateral audiogram acquisition. Left and right ears are traditionally evaluated completely unilaterally. To realize potential gains, AMLAG was generalized from two unilateral tests to a single bilateral test. The active bilateral audiogram allows observations in one ear to simultaneously update the model fit over both ears. This thesis shows that in a cohort of normal-hearing and hearing-impaired listeners, the bilateral audiogram converges to its final estimates significantly faster than sequential active unilateral audiograms. The flexibility of a framework capable of informative individual inference was then evaluated for dynamically masked audiograms. When one ear of an individual can hear significantly better than the other ear, assessing the worse ear with loud probe tones may require delivering masking noise to the better ear in order to prevent the probe tones from inadvertently being heard by the better ear. Current masking protocols are confusing, laborious and time consuming. 
Adding a standardized masking protocol to the AMLAG procedure alleviates all of these drawbacks by dynamically adapting the masking to an individual's specific needs. Dynamically masked audiograms are shown to achieve accurate threshold estimates and reduce test time compared to current clinical masking procedures used to evaluate individuals with highly asymmetric hearing, yet can also be used effectively and efficiently for anyone. Finally, the active machine learning framework was evaluated for estimating cognitive and perceptual variables in one joint assessment. Combining a verbal N-back and speech-in-noise assessment, a joint estimator links two disjoint assessments defined by two unique input domains and, for the first time, offers a direct measurement of the interactions between two of the most predictive measures of cognitive decline. Young and older healthy adults were assessed to investigate age-related adaptations in behavior and the inter-subject variability that is often seen in low-dimensional speech and memory tests. The joint cognitive and perceptual test accurately predicted standalone N-back but not speech-in-noise performance. This first implementation did not reveal significant interactions between speech and memory. However, the joint task framework did provide an estimate of participant performance over the entire two-dimensional domain without any experimenter-observed scoring and may better mirror the challenges of real-world tasks. While significant age-related differences were apparent, substantial within-group variance led to evaluating joint test performance in predicting individual differences in neural activity. Speech-in-noise tests may activate non-auditory specific networks of the brain as age and task difficulty increase. Some of these regions are domain-general networks that are also active during verbal working memory tests. Functional brain images were collected during an in-scanner speech-in-noise test for a portion of the joint test participants. Individual brain activity at regions of interest in the frontoparietal, cingulo-opercular, and speech networks was correlated to performance on the joint speech and memory test. No significant correlations were found, but the joint estimation of neural, cognitive, and perceptual behaviors through this framework may be possible with further test adaptations. Generally, the lack of significant findings does not detract from the feasibility and utility of a generalized framework that can accurately model complex cognitive, perceptual, and neural processes in individuals. As demonstrated in this thesis, high-dimensional, individual testing procedures facilitate the direct assessment of complicated human behaviors, empowering equitable, informative, and effective test methods.
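
    As a minimal sketch of the active-sampling idea behind such audiogram estimation, the code below refits a Gaussian process classifier over the frequency-intensity plane after each trial and probes the tone whose predicted detection probability is closest to 0.5. The simulated listener, kernel, and candidate grid are assumptions for illustration and are not the AMLAG implementation.

```python
# Active sampling for a 2-D audiogram: refit a Gaussian-process classifier after
# each trial and probe where the detection probability is most uncertain (~0.5).
# The simulated listener and kernel below are illustrative, not AMLAG itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def listener(freq_oct, level_db):
    """Simulated listener: detection threshold rises with frequency (assumed)."""
    return int(level_db > 20 + 5 * freq_oct)

# Candidate grid over (log2 frequency, intensity in dB HL) -- assumed axes.
freqs = np.linspace(0, 5, 25)              # octaves above a 250 Hz reference
levels = np.linspace(-10, 80, 25)
grid = np.array([(f, l) for f in freqs for l in levels])

X, y = [], []
for f, l in [(1, 70), (4, 0)]:             # two seed trials spanning the domain
    X.append([f, l]); y.append(listener(f, l))

for trial in range(25):
    gpc = GaussianProcessClassifier(kernel=RBF(length_scale=[1.0, 10.0]))
    gpc.fit(np.array(X), np.array(y))
    p = gpc.predict_proba(grid)[:, 1]      # P(detect) over the whole plane
    nxt = grid[np.argmin(np.abs(p - 0.5))] # most informative next tone
    X.append(list(nxt)); y.append(listener(*nxt))

print("trials used:", len(y), "P(detect) spans",
      round(float(p.min()), 2), "to", round(float(p.max()), 2))
```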