
    The 1st International Conference on Computational Engineering and Intelligent Systems

    Computational engineering, artificial intelligence and smart systems form a multidisciplinary field at the intersection of computer science, engineering and applied mathematics that has produced a variety of fascinating intelligent systems. Computational engineering encompasses fundamental engineering and science blended with advanced knowledge of mathematics, algorithms and computer languages. It is concerned with the modeling and simulation of complex systems and with data processing methods. Computing and artificial intelligence lead to smart systems: advanced machines designed to fulfill specific requirements. This proceedings book is a collection of papers presented at the first International Conference on Computational Engineering and Intelligent Systems (ICCEIS2021), held online during December 10-12, 2021. The collection covers a wide range of engineering topics, including smart grids, intelligent control, artificial intelligence, optimization, microelectronics and telecommunication systems. The contributions included in this book are of high quality, present their topics succinctly, and can serve as an excellent reference for readers in the field of computational engineering, artificial intelligence and smart systems.

    Classification Methods For Motor Imagery Based Brain Computer Interfaces

    Thesis (PhD) -- İstanbul Technical University, Institute of Science and Technology, 2016. The brain-computer interface (BCI) is a research area that has advanced considerably in recent years. With applications ranging from gaming equipment to artificial limbs, the fundamental aim of BCI technology is to establish a direct communication channel between the user's brain and an electronic device that does not depend on any peripheral nerve pathways. Motor imagery (MI) is a BCI paradigm in which a motor movement is predicted from the brain signals recorded while the user imagines performing it. Because it is an independent BCI paradigm and practical to use, motor imagery is the most popular of the various BCI types. Motor imagery signals originate in the motor cortex, the region of the brain responsible for voluntary movement. They can be acquired with functional magnetic resonance imaging (fMRI), positron emission tomography (PET), electrocorticography (ECoG) or electroencephalography (EEG); EEG is usually preferred because it is practical, cheap, fast and non-invasive. Despite its popularity, classifying motor imagery signals is difficult, mainly because of EEG's poor spatial resolution: motor-imagery-related activity mixes with signals from other sources in different regions of the brain, making it hard to recover the motor imagery component from the recorded EEG. Classification is further complicated by the fact that motor imagery signal characteristics vary from person to person and even over time for the same person, that the number of classes is limited, that EEG is non-stationary, and that subjects are inexperienced at imagining motor movements. The introduction of the thesis covers basic BCI concepts and the main BCI methods: i) steady-state visual evoked potential based BCIs, ii) P300-based BCIs, iii) slow cortical potential based BCIs, iv) cortical-neuronal activation potential based BCIs, and v) motor imagery based BCIs. Since the thesis focuses on motor imagery, it is described in detail, covering the physiological basis of MI signals, their characteristics and the difficulties encountered in processing them, followed by a detailed literature review on MI classification. During motor imagery, power changes known as event-related synchronization (ERS) and event-related desynchronization (ERD) occur over the motor cortex: ERD corresponds to a power decrease in a particular frequency band, while ERS corresponds to a power increase. The most characteristic marker of motor imagery is a power decrease in the µ band between 8 and 16 Hz; ERS is also observed between 20 and 30 Hz.
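To make the ERD/ERS picture concrete, the following minimal Python sketch computes log band-power per channel in the 8-16 Hz µ band for a single EEG trial; the sampling rate, filter order, array shapes and the rest-versus-imagery comparison are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mu_band_logpower(trial, fs=250.0, band=(8.0, 16.0)):
    """Log power per channel in the mu band for one EEG trial.

    trial : array of shape (n_channels, n_samples)
    fs    : sampling rate in Hz (assumed value)
    band  : frequency band in Hz; ERD shows up as a drop in this power
            during motor imagery relative to a rest baseline.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trial, axis=1)          # zero-phase band-pass
    return np.log(np.var(filtered, axis=1) + 1e-12)

# Illustrative use: compare imagery-period power against a rest baseline.
rng = np.random.default_rng(0)
rest = rng.standard_normal((22, 1000))                # hypothetical 22-channel data
imagery = rng.standard_normal((22, 1000))
erd = mu_band_logpower(imagery) - mu_band_logpower(rest)  # negative values ~ ERD
```

A drop of the imagery-period power below the rest baseline in this band is what the abstract refers to as ERD.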
The study surveys existing classification methods for motor imagery based brain-computer interfaces, which analyze the power changes that arise over the motor cortex when a person moves or intends to move their muscles, and develops new classification methods. The methods developed in the thesis are presented alongside the existing methods from the literature, and the classification performance of all of them is examined. The methods chapter first describes the principal MI classification approaches in the literature: a general MI classification framework is outlined, and each processing step is then explained in detail with reference to published work. The Common Spatial Patterns (CSP) method, a key spatial filtering technique for MI classification, is described together with the improvements that have been made to it. The first contribution of the thesis, the Task Related & Spatially Regularized Common Spatial Patterns (TR&SR-CSP) method, is then presented. TR&SR-CSP is a regularized CSP method that exploits the cortical locations where motor imagery signals originate: a purpose-built penalty matrix construction algorithm is introduced for training the spatial filters, and by taking into account the motor-cortex locations associated with the given tasks, the penalty matrix forces the spatial filters to focus on those regions of the cortex. The results obtained are consistent with the physiological evidence. This work was presented at the international conference on bioinformatics and biomedical engineering (IWBBIO) in 2014. The second contribution addresses further shortcomings of CSP: the Spatial Filter Network (SFN), a multi-layer structure that jointly optimizes a spatial filter and a classifier. It targets two problems of CSP: i) CSP only improves the between-class scatter and does not deal with the within-class scatter, and ii) CSP optimizes a given objective function rather than the classification performance itself. SFN, in contrast, trains both the spatial filter and the classifier by presenting each element of the training set to the network one at a time. The network is trained with the backpropagation approach used for artificial neural networks: for every training sample presented to the network, the output is evaluated and both the spatial filter weights and the classifier weights are updated. The Levenberg-Marquardt (LM) and backpropagation (BP) methods used in neural network training were employed as optimizers. The thesis presents the mathematical equations for running and training the SFN. The SFN work was published in the journal PLoS One. Finally, the methods chapter turns to spatio-spectral filtering methods, which perform optimization in both the spatial and the spectral domains; although CSP is a powerful method for its simplicity, it has some shortcomings.
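Since CSP is central to all three contributions, a compact sketch of the textbook two-class CSP computation may help; this follows the standard formulation (a generalized eigenvalue problem on the two class-average covariance matrices) rather than the thesis's own implementation, and the trial shapes and number of filter pairs are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Two-class Common Spatial Patterns.

    trials_a, trials_b : arrays of shape (n_trials, n_channels, n_samples)
    Returns a (2*n_pairs, n_channels) matrix of spatial filters whose output
    variance is maximal for one class and minimal for the other.
    """
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))              # per-trial normalization
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)                       # ascending eigenvalues
    sel = np.concatenate([order[:n_pairs], order[-n_pairs:]])  # both ends
    return eigvecs[:, sel].T

def csp_features(trial, W):
    """Normalized log-variance features of one trial under filters W."""
    z = W @ trial
    var = np.var(z, axis=1)
    return np.log(var / var.sum())
```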
In motor imagery based brain-computer interfaces, the success of CSP depends largely on the physiological phenomena called ERD (event-related desynchronization) and ERS (event-related synchronization). In practice, however, the frequency band in which ERD occurs differs from person to person, which is one of the biggest problems in designing a practical BCI. Until recently, the frequency band used with CSP was either left unspecified by taking a wide band or tuned manually. In general, applying CSP to unfiltered EEG, or to EEG filtered in an unsuitable band, yields poor classification accuracy. One remedy is to find the best frequency band for each subject through time-consuming searches and manual adjustments; although this has been shown to improve classification accuracy, it is laborious. Methods that jointly optimize spatial filters and frequency filters have therefore attracted considerable attention, the aim being to automatically optimize the spectral characteristics of the filters as well, rather than working only in the spatial domain as CSP does. The existing spatio-spectral methods in the literature are reviewed, and the final contribution of the thesis, Filter Bank Common Spatio-Spectral Patterns (FBCSSP), a method that optimizes filters in both the spectral and the spatial domains, is developed. The method consists of a filter bank that filters the EEG in several frequency bands, followed by two cascaded CSP layers. The first CSP layer spatially filters each filter-bank output, so that the EEG is spatially filtered in narrow bands; the second CSP layer takes the spatially filtered signals from the first layer and extracts the most informative ones, effectively performing a kind of frequency selection. Together, the two CSP layers form a spatio-spectral filter structure. The results show that high classification accuracies can be achieved. This work was accepted for presentation at the Information Technology in Bio- and Medical Informatics conference (ITBAM 2016) and will appear in Lecture Notes in Computer Science (LNCS). The results chapter describes the data sources used and the properties of the datasets, presents the evaluation framework, and details all parameter settings used for the methods. Both numerical and visual results are reported comparatively. The proposed methods perform well: compared with results reported for other methods in the literature, their classification performance is promising, and they raise performance on the datasets studied. In addition to the numerical evaluation, the physiological plausibility of the proposed methods was confirmed by analyzing the learned spatial and spectral filters.
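The FBCSSP structure described above, a filter bank followed by two cascaded CSP layers, can be sketched roughly as follows; the band edges, filter counts and the exact way the second layer consumes the first layer's outputs are assumptions made for illustration, not the configuration used in the thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def csp(trials_a, trials_b, n_pairs=2):
    """Minimal two-class CSP returning (2*n_pairs, n_channels) filters."""
    def avg_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)
    idx = np.argsort(vals)
    return vecs[:, np.r_[idx[:n_pairs], idx[-n_pairs:]]].T

def bandpass(trials, lo, hi, fs=250.0):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def fbcssp_train(trials_a, trials_b,
                 bands=((8, 12), (12, 16), (16, 24), (24, 30))):
    """Layer 1: one CSP per narrow band. Layer 2: CSP over the stacked outputs,
    which acts as an implicit frequency selector."""
    layer1, out_a, out_b = [], [], []
    for lo, hi in bands:
        fa, fb = bandpass(trials_a, lo, hi), bandpass(trials_b, lo, hi)
        W = csp(fa, fb)
        layer1.append((lo, hi, W))
        out_a.append(np.einsum("fc,ncs->nfs", W, fa))  # spatially filtered bands
        out_b.append(np.einsum("fc,ncs->nfs", W, fb))
    stacked_a = np.concatenate(out_a, axis=1)          # band outputs as "channels"
    stacked_b = np.concatenate(out_b, axis=1)
    layer2 = csp(stacked_a, stacked_b)
    return layer1, layer2
```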
All of these results demonstrate that the proposed methods are effective. Brain-computer interfacing (BCI) is an emerging topic applied to several areas, from gaming equipment to health assistive devices. BCI technology aims at establishing a direct communication pathway between the user's brain and an electronic device. Motor imagery is a BCI methodology in which the user's imagining of moving a limb is detected without any actual physical movement. Among the different BCI techniques, motor imagery is the most popular because of its practicality and because it is an independent BCI method. Electroencephalography (EEG) is generally used to acquire motor imagery signals, since it is a practical, cheap, fast and non-invasive technique for analyzing brain activity. However, classification of motor imagery signals is challenging: the poor spatial resolution of EEG makes it difficult to extract motor imagery signals directly, because they become mixed with much stronger signals from other sources in the brain. In this study, novel methods for the classification of motor imagery signals were developed. Existing methods and the proposed methods are presented and their classification performances analyzed. The thesis first presents the BCI concept and the main BCI methodologies, then describes the motor imagery paradigm, the physiological sources and main properties of motor imagery signals, and an extensive literature review on their classification. Next, the state-of-the-art method in motor imagery classification, common spatial patterns (CSP), is analyzed, and regularized CSP methods that address some of its drawbacks are described. The first contribution of this thesis, the task related & spatially regularized CSP method, is presented as a regularized CSP algorithm. The second contribution, a spatial filtering and classification structure named the spatial filter network (SFN), follows. After the spatial filtering algorithms, spectral and spatial filtering methodologies are presented, and a spatio-spectral filtering method called filter bank common spatio-spectral patterns (FBCSSP) is proposed. The datasets used in the study are introduced and the selected configurations of the methods are described. The results obtained with the proposed methods are promising; their performance is reported alongside important methods from the literature. The developed methods increased classification performance on the given datasets, and their physiological suitability was demonstrated by analyzing the obtained spatial and spectral filters. The results show the effectiveness of the proposed methods.
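The SFN idea of jointly optimizing a spatial filter and a classifier can also be illustrated with a small gradient-based analogue; the thesis trains its network with Levenberg-Marquardt and backpropagation, whereas this hypothetical sketch uses PyTorch with log-variance features and Adam purely for brevity, and every layer size and training setting is an assumption.

```python
import torch
import torch.nn as nn

class SpatialFilterNet(nn.Module):
    """Joint spatial filter + classifier, trained end to end (illustrative)."""
    def __init__(self, n_channels=22, n_filters=4, n_classes=2):
        super().__init__()
        self.spatial = nn.Linear(n_channels, n_filters, bias=False)  # spatial filter
        self.classify = nn.Linear(n_filters, n_classes)              # linear classifier

    def forward(self, x):                       # x: (batch, channels, samples)
        z = self.spatial(x.transpose(1, 2))     # filter each time sample
        feats = torch.log(z.var(dim=1) + 1e-8)  # log-variance per spatial filter
        return self.classify(feats)

# Minimal training loop on hypothetical data.
net = SpatialFilterNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 22, 500)                    # 64 fake trials
y = torch.randint(0, 2, (64,))
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
```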

    Engineering Local Electricity Markets for Residential Communities

    In line with the progressing decentralization of electricity generation, local electricity markets (LEMs) support electricity end customers in becoming active market participants instead of passive price takers. They provide a market platform for trading locally generated (renewable) electricity between residential agents (consumers, prosumers, and producers) within their community. Based on a structured literature review, a market engineering framework for LEMs is developed. The work focuses on two of the framework's eight components, namely the agent behavior and the (micro) market structure. Residential agent behavior is evaluated in two steps. Firstly, two empirical studies, a structural equation model-based survey with 195 respondents and an adaptive choice-based conjoint study with 656 respondents, are developed, conducted and evaluated. Secondly, a discount price LEM is designed following the surveys' results. Theoretical solutions of the LEM bi-level optimization problem with complete information, and heuristic reinforcement learning with incomplete information, are investigated in a multi-agent simulation to find the profit-maximizing market allocations. The (micro) market structure is investigated with regard to LEM business models, information systems and real-world application projects. Potential business models and their characteristics are combined in a taxonomy based on the results of 14 expert interviews. Then, the Smart Grid Architecture Model is utilized to derive the organizational, informational, and technical requirements for centralized and distributed information systems in LEMs. After providing an overview of current LEM implementation projects in Germany, the Landau Microgrid Project is used as an example to test the derived requirements. In conclusion, the work recommends that current LEM projects focus on overall discount electricity trading. Premium-priced local electricity should be offered to subgroups of households with individually higher valuations for local generation. Automated self-learning algorithms are needed to mitigate the trading effort for residential LEM agents in order to ensure participation. The utilization of regulatory niches is suggested until specific regulations for LEMs are established. Further, the development of specific business models for LEMs should become a prospective (research) focus.
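As a purely illustrative reading of the "discount price LEM" idea, the sketch below clears local trades pro rata at a price set between an assumed feed-in tariff and an assumed retail price; the clearing rule, the prices and the agent model are all hypothetical and are not the mechanism designed in this work.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    supply_kwh: float   # local generation offered
    demand_kwh: float   # consumption requested

def clear_discount_lem(agents, retail_price=0.30, feed_in_tariff=0.08, discount=0.5):
    """Match local supply and demand pro rata; price local energy between the
    feed-in tariff and the retail price (hypothetical clearing rule)."""
    total_supply = sum(a.supply_kwh for a in agents)
    total_demand = sum(a.demand_kwh for a in agents)
    traded = min(total_supply, total_demand)
    # Local price: retail minus a discount on the retail/feed-in spread.
    local_price = retail_price - discount * (retail_price - feed_in_tariff)
    allocations = {}
    for a in agents:
        sold = traded * (a.supply_kwh / total_supply) if total_supply else 0.0
        bought = traded * (a.demand_kwh / total_demand) if total_demand else 0.0
        allocations[a.name] = {"sold_kwh": sold, "bought_kwh": bought,
                               "balance_eur": (sold - bought) * local_price}
    return local_price, allocations

price, alloc = clear_discount_lem([Agent("prosumer", 4.0, 1.0),
                                   Agent("consumer", 0.0, 3.0)])
```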

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals and the methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other applications able to operate in real-world environments, such as mobile communication services and smart homes.

    Information recovery in the biological sciences: protein structure determination by constraint satisfaction, simulation and automated image processing

    Regardless of the field of study or particular problem, any experimental science always poses the same question: "What object or phenomena generated the data that we see, given what is known?" In the field of 2D electron crystallography, data is collected from a series of two-dimensional images, formed either as a result of diffraction mode imaging or TEM mode real imaging. The resulting dataset is acquired strictly in the Fourier domain as either coupled Amplitudes and Phases (as in TEM mode) or Amplitudes alone (in diffraction mode). In either case, data is received from the microscope in a series of CCD or scanned negatives of images which generally require a significant amount of pre-processing in order to be useful. Traditionally, processing of the large volume of data collected from the microscope was the time-limiting factor in protein structure determination by electron microscopy. Data must be initially collected from the microscope either on film negatives, which in turn must be developed and scanned, or from CCDs of sizes typically no larger than 2096x2096 (though larger models are in operation). In either case, data are finally ready for processing as 8-bit, 16-bit or (in principle) 32-bit grey-scale images. Regardless of data source, the foundation of all crystallographic methods is the presence of a regular Fourier lattice. Two-dimensional cryo-electron microscopy of proteins introduces special challenges, as multiple crystals may be present in the same image, producing in some cases several independent lattices. Additionally, scanned negatives typically have a rectangular region marking the film number and other details of image acquisition that must be removed prior to processing. If the edges of the images are not down-tapered, vertical and horizontal "streaks" will be present in the Fourier transform of the image, arising from the high-resolution discontinuities between the opposite edges of the image. These streaks can overlap with lattice points which fall close to the vertical and horizontal axes and disrupt both the information they contain and the ability to detect them. Lastly, SpotScanning (Downing, 1991) is a commonly used process whereby circular discs are individually scanned in an image. The large-scale regularity of the scanning pattern produces a low-frequency lattice which can interfere and overlap with any protein crystal lattices. We introduce a series of methods packaged into 2dx (Gipson et al., 2007) which simultaneously address these problems, automatically detecting accurate crystal lattice parameters for a majority of images. Further, a template is described for the automation of all subsequent image processing steps on the road to a fully processed dataset. The broader picture of image processing is one of reproducibility. The lattice parameters, for instance, are only one of hundreds of parameters which must be determined or provided and subsequently stored and accessed in a regular way during image processing. Numerous steps, from correct CTF and tilt-geometry determination to the final stages of symmetrization and optimal image recovery, must be performed sequentially and repeatedly for hundreds of images. The goal in such a project is then to automatically process as significant a portion of the data as possible and to reduce unnecessary, repetitive data entry by the user. Here also, 2dx (Gipson et al., 2007), the image processing package designed to automatically process individual 2D TEM images, is introduced.
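The edge down-tapering step described above, which suppresses the vertical and horizontal streaks that hard image edges would otherwise put through the Fourier transform, can be sketched as follows; the taper width and window shape are assumptions rather than 2dx defaults.

```python
import numpy as np

def taper_edges(img, width=64):
    """Smoothly roll the image edges toward zero before the FFT.

    Discontinuities between opposite edges act like a box function and put
    bright vertical/horizontal streaks through the transform, which can
    overlap crystal lattice points lying near the axes.
    """
    img = img.astype(float) - img.mean()
    h, w = img.shape
    win_y, win_x = np.ones(h), np.ones(w)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(width) / width))  # raised cosine
    win_y[:width], win_y[-width:] = ramp, ramp[::-1]
    win_x[:width], win_x[-width:] = ramp, ramp[::-1]
    return img * np.outer(win_y, win_x)

image = np.random.rand(512, 512)                 # stand-in for a scanned micrograph
spectrum = np.fft.fftshift(np.abs(np.fft.fft2(taper_edges(image))))
```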
This package focuses on reliability, ease of use and automation to produce the finished results necessary for full three-dimensional reconstruction of the protein in question. Once individual 2D images have been processed, they contribute to a larger project-wide 3-dimensional dataset. Several challenges exist in processing this dataset, besides simply the organization of results and project-wide parameters. In particular, though tilt-geometry, relative amplitude scaling and absolute orientation are in principle known (or obtainable from an individual image), errors, uncertainties and heterogeneous data types produce a 3D dataset with many parameters to be optimized. 2dx_merge (Gipson et al., 2007) is the follow-up to the first release of 2dx, which had originally processed only individual images. Based on the guiding principles of the earlier release, 2dx_merge focuses on ease of use and automation. The result is a fully qualified 3D structure determination package capable of turning hundreds of electron micrograph images, nearly completely automatically, into a full 3D structure. Most of the processing performed in the 2dx package is based on the excellent suite of programs collectively termed the MRC package (Crowther et al., 1996). Extensions to this suite and alternative algorithms continue to play an essential role in image processing as computers become faster and as advancements are made in the mathematics of signal processing. In this capacity, an alternative procedure to generate a 3D structure from processed 2D images is presented. This algorithm, entitled "Projective Constraint Optimization" (PCO), leverages prior known information, such as symmetry and the fact that the protein is bound in a membrane, to extend the normal boundaries of resolution. In particular, traditional methods (Agard, 1983) make no attempt to account for the "missing cone", a vast, unsampled region in 3D Fourier space arising from specimen tilt limitations in the microscope. Provided sufficient data, PCO simultaneously refines the dataset, accounting for error, as well as attempting to fill this missing cone. Though PCO provides a near-optimal 3D reconstruction from the data, depending on the initial data quality and the amount of prior knowledge there may be a host of solutions, and more importantly pseudo-solutions, that are more or less consistent with the provided dataset. Trying to find a global best fit for the known information and data can be a daunting mathematical challenge; to this end, the use of meta-heuristics is addressed. Specifically, in the case of many pseudo-solutions, so long as a suitably defined error metric can be found, quasi-evolutionary swarm algorithms can be used that search solution space, sharing data as they go. Given sufficient computational power, such algorithms can dramatically reduce the search time for global optima for a given dataset. Once the structure of a protein has been determined, many questions often remain about its function. Questions about the dynamics of a protein, for instance, are often not readily answerable from structure alone. To this end, an investigation into computationally optimized structural dynamics is described. Here, in order to find the most likely path a protein might take through "conformation space" between two conformations, a graphics processing unit (GPU) optimized program and set of libraries is written to speed up the calculation of this process by a factor of 30.
The tools and methods developed here serve as a conceptual template for how GPU coding was applied to other aspects of the work presented here, as well as to GPU programming generally. The final portion of the thesis takes an apparent step in reverse, presenting a dramatic, yet highly predictive, simplification of a complex biological process. Kinetic Monte Carlo simulations idealize thousands of proteins as interacting agents governed by a set of simple rules (i.e., react/dissociate), offering highly accurate insights into the large-scale cooperative behavior of proteins. This work demonstrates that, for many applications, structure, dynamics or even general knowledge of a protein may not be necessary for a meaningful biological story to emerge. Additionally, even in cases where structure and function are known, such simulations can help to answer the biological question in its entirety, from structure, to dynamics, to ultimate function.
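The kinetic Monte Carlo picture of proteins as agents obeying simple react/dissociate rules can be illustrated with a minimal Gillespie-style simulation of a reversible binding reaction; the species, rate constants and counts are assumptions chosen only to show the algorithm, not parameters from the work.

```python
import numpy as np

def gillespie_binding(n_a=500, n_b=500, n_ab=0,
                      k_on=1e-3, k_off=0.1, t_end=100.0, seed=0):
    """Stochastic simulation of A + B <-> AB (illustrative rates)."""
    rng = np.random.default_rng(seed)
    t, history = 0.0, [(0.0, n_ab)]
    while t < t_end:
        rate_bind = k_on * n_a * n_b           # propensity of A + B -> AB
        rate_unbind = k_off * n_ab             # propensity of AB -> A + B
        total = rate_bind + rate_unbind
        if total == 0:
            break
        t += rng.exponential(1.0 / total)      # waiting time to next event
        if rng.random() < rate_bind / total:   # choose which event fires
            n_a, n_b, n_ab = n_a - 1, n_b - 1, n_ab + 1
        else:
            n_a, n_b, n_ab = n_a + 1, n_b + 1, n_ab - 1
        history.append((t, n_ab))
    return history

trace = gillespie_binding()                    # list of (time, bound-complex count)
```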

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    [Activity of Institute for Computer Applications in Science and Engineering]

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science.

    Recent Advances in Signal Processing

    Signal processing is a critical component of the majority of new technological inventions and of a wide variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now draw on tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand, addressing image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Motion Artifact Processing Techniques for Physiological Signals

    The combination of a declining birth rate and increasing life expectancy continues to drive the demographic shift toward an ageing population, and this is placing an ever-increasing burden on our healthcare systems. The urgent need to address this so-called healthcare "time bomb" has led to rapid growth in research into ubiquitous, pervasive and distributed healthcare technologies, where recent advances in signal acquisition, data storage and communication are helping such systems become a reality. However, as with recordings performed in the hospital environment, artifacts continue to be a major issue for these systems. The magnitude and frequency of artifacts can vary significantly depending on the recording environment, with one of the major contributions being the motion of the subject or the recording transducer. This thesis therefore addresses the challenge of removing motion artifact from various physiological signals. The preliminary investigations focus on artifact identification and on tagging physiological signal streams with measures of signal quality. A new method for quantifying signal quality is developed based on inexpensive accelerometers, which facilitates the appropriate use of artifact processing methods as needed. These artifact processing methods are thoroughly examined as part of a comprehensive review of the most commonly applicable methods, which forms the basis for the comparative studies subsequently presented. A simple but novel experimental methodology for the comparison of artifact processing techniques is then proposed, designed and tested for algorithm evaluation. The method is demonstrated to be highly effective for the type of artifact challenges common in a connected health setting, particularly those concerned with brain activity monitoring. This research primarily applies the techniques to functional near infrared spectroscopy (fNIRS) and electroencephalography (EEG) data, owing to their high susceptibility to contamination by subject-motion-related artifact. Using the novel experimental methodology, complemented with simulated data, a comprehensive comparison of a range of artifact processing methods is conducted, allowing the identification of the best performing methods. A novel artifact removal technique is also developed, namely ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA), which provides the best results when applied to fNIRS data under particular conditions. Four of the best performing techniques were then tested on real ambulatory EEG data contaminated with movement artifacts comparable to those observed during in-home monitoring. It was determined that, when analysing EEG data, the Wiener filter is consistently the best performing artifact removal technique. However, when employing the fNIRS data, the best technique depends on a number of factors, including: 1) the availability of a reference signal and 2) whether or not the form of the artifact is known. It is envisaged that the use of physiological signal monitoring for patient healthcare will grow significantly over the coming decades, and it is hoped that this thesis will aid in the progression and development of artifact removal techniques capable of supporting this growth.
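When a motion reference such as an accelerometer channel is available, one common family of approaches regresses the predictable artifact component out of the contaminated signal; the least-squares sketch below illustrates that general idea under assumed data and lag settings, and is not the Wiener filter configuration evaluated in the thesis.

```python
import numpy as np

def regress_out_artifact(signal, reference, n_lags=5):
    """Remove the component of `signal` predictable from lagged copies of
    `reference` (e.g. an accelerometer channel) via least squares.

    signal, reference : 1-D arrays of equal length
    n_lags            : number of reference lags used as regressors (assumed)
    """
    n = len(signal)
    X = np.column_stack([np.roll(reference, k) for k in range(n_lags)])
    X[:n_lags, :] = 0.0                       # discard wrapped-around samples
    coeffs, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return signal - X @ coeffs                # cleaned signal estimate

rng = np.random.default_rng(1)
motion = rng.standard_normal(2000)                          # reference channel
eeg = rng.standard_normal(2000) + 0.8 * np.roll(motion, 2)  # contaminated signal
cleaned = regress_out_artifact(eeg, motion)
```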

    Wearable and Nearable Biosensors and Systems for Healthcare

    Biosensors and systems in the form of wearables and “nearables” (i.e., everyday sensorized objects with transmitting capabilities such as smartphones) are rapidly evolving for use in healthcare. Unlike conventional approaches, these technologies can enable seamless or on-demand physiological monitoring, anytime and anywhere. Such monitoring can help transform healthcare from the current reactive, one-size-fits-all, hospital-centered approach into a future proactive, personalized, decentralized structure. Wearable and nearable biosensors and systems have been made possible through integrated innovations in sensor design, electronics, data transmission, power management, and signal processing. Although much progress has been made in this field, many open challenges for the scientific community remain, especially for those applications requiring high accuracy. This book contains the 12 papers that constituted a recent Special Issue of Sensors sharing the same title. The aim of the initiative was to provide a collection of state-of-the-art investigations on wearables and nearables, in order to stimulate technological advances and the use of the technology to benefit healthcare. The topics covered by the book offer both depth and breadth pertaining to wearable and nearable technology. They include new biosensors and data transmission techniques, studies on accelerometers, signal processing, and cardiovascular monitoring, clinical applications, and validation of commercial devices.