83 research outputs found

    Wrist-based Phonocardiogram Diagnosis Leveraging Machine Learning

    With the tremendous growth of technology and the fast pace of life, instant information has become an everyday necessity, especially in emergencies where every minute counts towards saving lives. mHealth is the adopted approach for quick diagnosis using mobile devices, but it remains challenging because of the required data quality, high computational load, and high power consumption. The aim of this research is to diagnose heart conditions from phonocardiogram (PCG) analysis using machine learning techniques under limited processing power, so that the method can later be encapsulated in a mobile device. The diagnosis of PCG is performed using two techniques: (1) parametric estimation with multivariate classification, in particular a discriminant function, explored at length with different numbers of descriptive features extracted with a Wavelet Transform (Filter Bank); and (2) Artificial Neural Networks, specifically pattern recognition, also applied to a Wavelet Transform (Filter Bank) decomposition of the PCG. The first technique achieved 97.33% successful diagnosis on PCG with a 19 dB signal-to-noise ratio when the signal was decomposed into four sub-bands using a second-order Filter Bank. Each sub-band was described by two features, the signal's mean and covariance, giving eight features in total for roughly a one-minute PCG sample. Different Filter Bank orders and numbers of features are also explored and compared. Using the second technique, the diagnosis resulted in 100% successful classification with an 83.3% trust level. The results are assessed, and new improvements are recommended and discussed as part of future work.
    Peer reviewed
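
    As an illustration of the first technique, a minimal sketch in Python follows (not the thesis's code): a two-level wavelet packet tree, one realisation of a filter bank, splits the PCG into four sub-bands; each sub-band is summarised by its mean and variance (the single-channel analogue of covariance), and the eight-feature vectors train a linear discriminant. The db2 wavelet, the pywt/scikit-learn stack, and the recordings/labels names are assumptions.

    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def pcg_features(signal: np.ndarray) -> np.ndarray:
        """Eight features: mean and variance of each of four wavelet sub-bands."""
        wp = pywt.WaveletPacket(data=signal, wavelet="db2", maxlevel=2)  # assumed wavelet
        subbands = [node.data for node in wp.get_level(2, order="freq")]  # four sub-bands
        return np.array([stat for sb in subbands for stat in (sb.mean(), sb.var())])

    # Hypothetical usage: `recordings` is a list of 1-D PCG arrays and `labels`
    # marks each recording as healthy or pathological.
    # X = np.vstack([pcg_features(r) for r in recordings])
    # clf = LinearDiscriminantAnalysis().fit(X, labels)
    # prediction = clf.predict(pcg_features(new_recording).reshape(1, -1))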

    Multimodal interaction for deliberate practice


    Thirty-fourth Annual Symposium of Trinity College Undergraduate Research

    2021 annual volume of abstracts for science research projects conducted by students at Trinity College

    Proceedings, MSVSCC 2019

    Old Dominion University's Department of Modeling, Simulation & Visualization Engineering (MSVE) and the Virginia Modeling, Analysis and Simulation Center (VMASC) held the 13th annual Modeling, Simulation & Visualization (MSV) Student Capstone Conference on April 18, 2019. The conference featured student research and student projects that are central to MSV. Faculty members also participated, volunteering their time to support their students' research, facilitate the various conference tracks, serve as judges for each track, and provide overall assistance to the conference. This cohesive, collaborative effort resulted in a successful symposium for everyone involved. These proceedings feature the works presented at the conference. Capstone Conference Chair: Dr. Yuzhong Shen. Capstone Conference Student Chair: Daniel Pere

    Deep Learning Based Malware Classification Using Deep Residual Network

    Traditional malware detection approaches rely heavily on a feature extraction procedure. In this paper we propose a deep learning-based malware classification model using an 18-layer deep residual network. Our model takes the raw bytecodes of malware samples, converts them to 3-channel RGB images, and then applies deep learning techniques to classify the malware. Our experimental results show that the deep residual network model achieved an average accuracy of 86.54% under 5-fold cross-validation. Compared to traditional methods for malware classification, our deep residual network model greatly simplifies the malware detection and classification procedure while achieving very good classification accuracy. The dataset used in this paper for training and testing is the Malimg dataset, one of the largest malware datasets, released by the Vision Research Lab at UCSB
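
    A minimal sketch, assuming PyTorch and torchvision, of the pipeline the abstract describes: raw bytecodes are reshaped into a 3-channel RGB image and classified by an 18-layer residual network. The 224x224 image size, the zero-padding of short files, and the bytes_to_rgb helper are illustrative assumptions; the paper does not specify its preprocessing.

    import numpy as np
    import torch
    from torchvision.models import resnet18

    def bytes_to_rgb(path: str, side: int = 224) -> torch.Tensor:
        """Read raw bytecodes and pack them into a 3 x side x side RGB tensor."""
        raw = np.fromfile(path, dtype=np.uint8)
        n = 3 * side * side
        raw = np.pad(raw[:n], (0, max(0, n - raw.size)))  # truncate or zero-pad
        return torch.from_numpy(raw.reshape(3, side, side).astype(np.float32) / 255.0)

    # 25 output classes would match the Malimg dataset's malware families.
    model = resnet18(num_classes=25)
    # logits = model(bytes_to_rgb("sample.bin").unsqueeze(0))  # batch of one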

    Using Real-Time Data Flux In Art – The Mediation Of A Situation As It Unfolds: RoadMusic – An Experimental Case Study.

    The practice driving this research is called RoadMusic. The project uses a small computer-based system installed in a car that composes music from the flux of information it captures about the journey as it unfolds. It uses a technique known as sonification, which consists of mapping data to sound. In the case of RoadMusic, this data capture is real-time, external to the computer, and mobilised with the user. This dissertation investigates ways in which such a sonification can become an artistic form. To interrogate the specificity of an art of real-time, it considers philosophical theories of the fundamental nature of time and immediacy and the ways in which the human mind ‘makes sense’ of this flux. After extending this scrutiny via theories of system and environment, it proceeds to extract concepts and principles leading to a possible art of real-time flux. Time, immediacy and the everyday are recurring questions in art and music, and this study reviews practices that address them, essentially through three landmark composers of the twentieth century: Iannis Xenakis, John Cage and Murray Schafer. To gain precision with regard to the nature of musical listening, it then probes theories of audio cognition and reflects on ways in which these can apply to real-time composing. The art of sonifying data extracted from the environment is arguably only as recent as the computer programs it depends on. This study reviews different practices that contribute towards a corpus of sonification-art, paying special attention to those in which the process takes place in real time. This is extended by an interrogation of the effect that mobility has on our listening experience. RoadMusic is now a fully functional device generating multi-timbral music from immediate data about its surroundings. This dissertation argues that this process can be an alternative to mainstream media systems; it describes how RoadMusic’s programs function and the ways in which they have evolved to incorporate the ideas developed in this thesis. It shows how RoadMusic is now developing beyond my own personal practice and how it intends to reach a wider audience
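
    A minimal sketch of the parameter-mapping idea behind sonification: each incoming data value sets the pitch of a short tone. The two-octave pitch range, the sine timbre and the sonify helper are illustrative assumptions; RoadMusic's actual mappings are considerably richer.

    import numpy as np

    SAMPLE_RATE = 44100  # audio samples per second

    def sonify(value: float, lo: float = -1.0, hi: float = 1.0,
               dur: float = 0.1) -> np.ndarray:
        """Map a data value in [lo, hi] to a short sine tone between 220 and 880 Hz."""
        norm = (np.clip(value, lo, hi) - lo) / (hi - lo)  # normalise to [0, 1]
        freq = 220.0 * 2.0 ** (2.0 * norm)  # two octaves above A3
        t = np.arange(int(SAMPLE_RATE * dur)) / SAMPLE_RATE
        return np.sin(2 * np.pi * freq * t)

    # Hypothetical usage: turn a stream of sensor readings into audio.
    # audio = np.concatenate([sonify(v) for v in sensor_stream])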

    Write a Book IQP

    2050: The settlement on Mars has been cut off from Earth for nearly 5 years. In spite of their efforts to conserve what little food, water and oxygen they still have, they are running out of time... The Desperates back on Earth have mastered Darwinian survival, while the STEM-Heads have pursued a more discreet evasion of Death since the Collapse of 2045. Yet all of them dream of escaping from their overheated, overpopulated Hell called Home. As the mission to clean up after First Mars leads a small STEM-Head band towards Kennedy Space Center, rumors of a distant paradise reach Desperate leaders, and, all of a sudden, all eyes are back on Mars...

    Hearing morse, music, mountains and heart beats: a sociology of sensory knowing

    We rely on our senses to make judgements and perform roles, whether these are mundane aspects of life such as road crossing, or the more specialised tasks of music-making and paediatric surgery. Taking the example of hearing, this thesis argues that it is useful to consider the senses as a form of knowledge, adopting Fredrik Barth's position that knowledges are avenues through which people actively engage with their worlds. In defining knowledge and the senses in these terms, this research is an exploratory contribution to both sensory studies and sociologies of knowledge. Based on participant observation and interviews with 92 musicians, doctors, adventurers and Morse code operators, the thesis begins by examining each epistemic community's underlying knowledge base, before exploring their learning methods and the conditions that support the development of aural acuity. It then explores the role of the senses in expert practices, illustrating their value in decision-making, particularly in critical contexts. This thesis argues that the senses are a dynamic and active form of knowledge that needs to be examined at the micro- and macro-sociological levels, as well as across careers and lifespans. It illustrates how the senses are learnt, interactive, responsive and personal

    Interfaces de fala silenciosa multimodais para português europeu com base na articulação (Multimodal silent speech interfaces for European Portuguese based on articulation)

    Joint MAPi Doctoral Programme in Informatics
    The concept of silent speech, when applied to Human-Computer Interaction (HCI), describes a system which allows for speech communication in the absence of an acoustic signal. By analyzing data gathered during different parts of the human speech production process, Silent Speech Interfaces (SSI) allow users with speech impairments to communicate with a system. SSI can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality, or non-disturbance are important. Nonetheless, despite recent advances, the performance and usability of silent speech systems still have much room for improvement. Better performance would enable their application in relevant areas such as Ambient Assisted Living. It is therefore necessary to extend our understanding of the capabilities and limitations of silent speech modalities and to enhance their joint exploration. Thus, in this thesis, we have established several goals: (1) expand SSI language support to European Portuguese (EP); (2) overcome identified limitations of current SSI techniques in detecting EP nasality; (3) develop a multimodal HCI approach for SSI based on non-invasive modalities; and (4) explore more direct measures in the multimodal SSI for EP, acquired from more invasive/obtrusive modalities, to be used as ground truth for articulation processes, enhancing our comprehension of the other modalities. To achieve these goals and support our research in this area, we created a multimodal SSI framework that fosters leveraging modalities and combining information. The proposed framework goes beyond the data acquisition process itself, including methods for online and offline synchronization, multimodal data processing, feature extraction, feature selection, analysis, classification and prototyping. Examples of applicability are provided for each stage of the framework. These include articulatory studies for HCI, the development of a multimodal SSI based on less invasive modalities, and the use of ground-truth information from more invasive/obtrusive modalities to overcome the limitations of other modalities. In the work presented here, we also apply existing SSI methods to EP for the first time, noting that nasal sounds may cause inferior performance in some modalities. In this context, we propose a non-invasive solution for the detection of nasality based on a single Surface Electromyography sensor, suitable for inclusion in a multimodal SSI.
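
    A minimal sketch, assuming NumPy and scikit-learn, of how single-sensor surface EMG nasality detection could be framed: slide a window over the EMG signal, extract simple time-domain features, and train a binary classifier to separate nasal from non-nasal segments. The window size, the feature set and the SVM are assumptions, not the thesis's implementation.

    import numpy as np
    from sklearn.svm import SVC

    def emg_features(window: np.ndarray) -> np.ndarray:
        """Three common time-domain EMG features for one window."""
        rms = np.sqrt(np.mean(window ** 2))  # root mean square (energy)
        zc = np.mean(np.abs(np.diff(np.signbit(window).astype(np.int8))))  # zero-crossing rate
        mav = np.mean(np.abs(window))  # mean absolute value
        return np.array([rms, zc, mav])

    def windows(signal: np.ndarray, size: int = 256, hop: int = 128):
        """Yield overlapping analysis windows over the raw sensor signal."""
        for start in range(0, len(signal) - size + 1, hop):
            yield signal[start:start + size]

    # Hypothetical usage: `emg` is the raw one-channel signal and `nasal_labels`
    # marks each window as nasal (1) or oral (0).
    # X = np.vstack([emg_features(w) for w in windows(emg)])
    # clf = SVC(kernel="rbf").fit(X, nasal_labels)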