
    Polymorphic computing abstraction for heterogeneous architectures

    The integration of multiple computing paradigms onto a system on chip (SoC) has pushed the boundaries of design space exploration for hardware architectures and the computing system software stack. The heterogeneity of computing styles in an SoC has created a new class of architectures referred to as heterogeneous architectures. Novel applications developed to exploit the different computing styles of embedded SoCs are user-centric. Software and hardware designers face several challenges in harnessing the full potential of heterogeneous architectures. Applications have to execute on more than one compute style to increase overall SoC resource utilization; the implication of such an abstraction is that application threads need to be polymorphic. The operating system layer is thus faced with the problem of scheduling polymorphic threads. Resource allocation is another important problem to be dealt with by the OS. Morphism evolution of application threads is constrained by the availability of heterogeneous computing resources. Traditional design optimization goals such as computational power and lower energy per computation are inadequate to satisfy the resource needs of user-centric applications. Resource allocation decisions at the application layer need to permeate to the architectural layer to avoid conflicting demands that may affect the energy-delay characteristics of application threads. We propose the polymorphic computing abstraction as a unified computing model for heterogeneous architectures to address these issues. A simulation environment for polymorphic applications is developed and evaluated under various scheduling strategies to determine the effectiveness of the polymorphism abstraction on resource allocation. A user satisfaction model is also developed to complement polymorphism and is used to optimize resource utilization at the application and network layers of embedded systems.
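
    The abstract does not give the scheduler itself; the sketch below is a hedged illustration of what scheduling polymorphic threads could look like, where each thread carries several "morphs" (one per compute style) and a greedy scheduler picks the available morph with the lowest energy-delay product. All names, units, and the EDP criterion are assumptions for illustration, not the thesis's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Morph:
    """One executable form of a polymorphic thread on a given compute style."""
    style: str      # e.g. "cpu", "gpu", "dsp" -- illustrative labels
    energy: float   # energy per invocation (J), illustrative units
    delay: float    # latency per invocation (s)

def schedule(threads, free_styles):
    """Greedy sketch: map each thread to the available morph with the
    lowest energy-delay product (EDP). The EDP criterion is an
    assumption, not the thesis's actual scheduling strategy."""
    plan = {}
    for tid, morphs in threads.items():
        usable = [m for m in morphs if m.style in free_styles]
        if usable:
            plan[tid] = min(usable, key=lambda m: m.energy * m.delay)
    return plan

threads = {"t0": [Morph("cpu", 2.0, 0.5), Morph("gpu", 3.0, 0.1)],
           "t1": [Morph("cpu", 1.0, 0.4), Morph("dsp", 0.6, 0.3)]}
print(schedule(threads, {"cpu", "gpu"}))   # t0 -> gpu, t1 -> cpu
```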

    Energy efficient enabling technologies for semantic video processing on mobile devices

    Semantic object-based processing will play an increasingly important role in future multimedia systems due to the ubiquity of digital multimedia capture/playback technologies and increasing storage capacity. Although the object-based paradigm has many undeniable benefits, numerous technical challenges remain before such applications become pervasive, particularly on computationally constrained mobile devices. A fundamental issue is the ill-posed problem of semantic object segmentation. Furthermore, on battery-powered mobile computing devices, the additional algorithmic complexity of semantic object-based processing compared to conventional video processing is highly undesirable from both a real-time operation and a battery life perspective. This thesis attempts to tackle these issues by firstly constraining the solution space and focusing on the human face as a primary semantic concept of use to users of mobile devices. A novel face detection algorithm is proposed, which from the outset was designed to be amenable to offloading from the host microprocessor to dedicated hardware, thereby providing real-time performance and reducing power consumption. The algorithm uses an Artificial Neural Network (ANN) whose topology and weights are evolved via a genetic algorithm (GA). The computational burden of the ANN evaluation is offloaded to a dedicated hardware accelerator, which is capable of processing any evolved network topology. Efficient arithmetic circuitry, which leverages modified Booth recoding, column compressors and carry-save adders, is adopted throughout the design. To tackle the increased computational costs associated with object tracking and object-based shape encoding, a novel energy-efficient binary motion estimation architecture is proposed. Energy is reduced in the proposed motion estimation architecture by minimising the redundant operations inherent in the binary data. Both architectures are shown to compare favourably with the relevant prior art.
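
    On binary data, the sum of absolute differences that drives block matching collapses to an XOR followed by a popcount; this is the kind of redundancy a binary motion estimation architecture can exploit. The sketch below illustrates that identity in software; the block size, search radius, and the exhaustive search itself are illustrative assumptions, not the thesis's hardware architecture.

```python
import numpy as np

def binary_sad(a, b):
    """On binary (0/1) blocks, SAD(a, b) == popcount(a XOR b)."""
    return int(np.count_nonzero(np.logical_xor(a, b)))

def full_search(cur, ref, bx, by, n=16, radius=8):
    """Exhaustive block-matching sketch over a +/-radius window.
    Parameters are illustrative, not taken from the thesis."""
    block = cur[by:by + n, bx:bx + n]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                cost = binary_sad(block, ref[y:y + n, x:x + n])
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```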

    A Motion-Adaptive Few-Shot Fault Detection Method for Industrial Robot Gearboxes Using Residual Convolutional Neural Networks

    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical Engineering, August 2020. Advisor: Byeng D. Youn. Nowadays, industrial robots are indispensable equipment for automated manufacturing processes because they can perform repetitive tasks with consistent precision and accuracy. However, faults in an industrial robot can lead to unexpected shutdowns of the production line, which bring significant economic losses, so fault detection is important. The gearbox, one of the main drivetrain components of an industrial robot, is often subjected to high torque loads, and faults occur frequently. When faults occur in the gearbox, the amplitude and frequency of the torque signal are modulated, which changes the characteristics of the torque signal. Although several previous studies have proposed fault detection methods for industrial robots using torque signals, it remains a challenge to extract fault-related features under various environmental and operating conditions and to detect faults in the complex motions used at industrial sites. To overcome these difficulties, this paper proposes a novel motion-adaptive few-shot (MAFS) fault detection method for industrial robot gearboxes that uses torque ripples, built on a one-dimensional (1D) residual convolutional neural network (Res-CNN) and binary-supervised domain adaptation (BSDA). The overall procedure is as follows. First, moving-average filtering is applied to the torque signal to extract its trend, and the high-frequency torque ripples are obtained as the residual between the original and filtered signals. Second, the state of the pre-processed torque ripples is classified under various operating and environmental conditions; the Res-CNN is shown to 1) effectively distinguish small differences between normal and faulty torque ripples, and 2) focus on important regions of the input data through an attention effect. Third, a Siamese network is constructed from a network pre-trained on the source domain of simple motions, and faults are detected on the target domain of complex motions through BSDA. As a result, 1) the physical mechanisms of torque ripples jointly shared between simple and complex motions are learned, and 2) gearbox faults are adaptively detected while the industrial robot executes complex motions. The proposed method showed the highest accuracy among deep-learning-based methods under few-shot conditions where only one cycle each of normal and faulty data from complex motions is available. In addition, the transferable regions of the torque ripples after domain adaptation were highlighted using 1D guided Grad-CAM. The effectiveness of the proposed method was validated with experimental data from multi-axis welding motions at constant and transient speeds, which are commonly executed in real industrial settings such as automobile manufacturing lines, and the method is expected to be applicable to other types of motions, such as inspection, painting, and assembly. The source code is available at https://github.com/oyt9306/MAFS.
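
    The first pre-processing step above lends itself to a compact illustration: subtract a moving-average trend from the raw torque signal so only the high-frequency ripple remains. The sketch below mirrors that description; the window length is an assumed value, not one taken from the thesis.

```python
import numpy as np

def torque_ripples(torque, window=51):
    """Trend removal as described above: the ripple is the residual
    between the raw torque signal and its moving average. The window
    length is an assumption."""
    kernel = np.ones(window) / window
    trend = np.convolve(torque, kernel, mode="same")
    return torque - trend
```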

    Sensing and Signal Processing in Smart Healthcare

    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial because a system that is not easy to use will create a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications because it must ensure high accuracy with a high level of confidence in order for the applications to be useful for clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems, mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    Low Power Circuits for Smart Flexible ECG Sensors

    Cardiovascular diseases (CVDs) are the world's leading cause of death. In-home heart condition monitoring effectively reduces the hospitalization rate of CVD patients. Flexible electrocardiogram (ECG) sensors provide an affordable, convenient and comfortable in-home monitoring solution. The three critical building blocks of the ECG sensor, i.e., the analog frontend (AFE), the QRS detector, and the cardiac arrhythmia classifier (CAC), are studied in this research. A fully differential difference amplifier (FDDA) based AFE that employs a DC-coupled input stage increases the input impedance and improves the CMRR. A parasitic-capacitor reuse technique is proposed to improve the noise/area efficiency and CMRR. An on-body DC bias scheme is introduced to deal with the input DC offset. Implemented in a 0.35 μm CMOS process with an area of 0.405 mm², the proposed AFE consumes 0.9 μW at 1.8 V and shows an excellent noise efficiency factor of 2.55 and a CMRR of 76 dB. Experiments show the proposed AFE not only picks up a clean ECG signal with electrodes placed as close as 2 cm apart under both resting and walking conditions, but also obtains the distinct α-wave after an eye blink in an EEG recording. A personalized QRS detection algorithm is proposed that achieves an average positive prediction rate of 99.39% and a sensitivity of 99.21%. The user-specific template avoids the complicated models and parameters used in existing algorithms while covering most situations encountered in practical applications. Detection is based on comparing the correlation coefficient of the user-specific template with the ECG segment under detection. The proposed one-target clustering reduces the number of iterations required. A continuous-in-time discrete-in-amplitude (CTDA) artificial neural network (ANN) based CAC is proposed for the smart ECG sensor. The proposed CAC achieves over 98% classification accuracy for the 4 types of beats defined by the AAMI (Association for the Advancement of Medical Instrumentation). The CTDA scheme significantly reduces the number of input samples and simplifies the sample representation to one bit. Thus, the number of arithmetic operations and the ANN structure are greatly simplified. The proposed CAC is verified on FPGA and implemented in a 0.18 μm CMOS process. Simulation results show it can operate at clock frequencies from 10 kHz to 50 MHz. The average power for a patient with a 75 bpm heart rate is 13.34 μW.
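
    The QRS detector described above amounts to template matching via a Pearson correlation coefficient. Below is a minimal sketch of that idea; the threshold value, the sample-by-sample stride, and the omission of the one-target clustering step are simplifications and assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def detect_qrs(ecg, template, threshold=0.8):
    """Slide a user-specific QRS template across the ECG and flag
    segments whose Pearson correlation with the template exceeds a
    threshold. The threshold value is an assumption."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(ecg) - n + 1):
        seg = ecg[i:i + n]
        s = seg.std()
        if s == 0:
            continue                      # flat segment: correlation undefined
        r = float(np.dot((seg - seg.mean()) / s, t)) / n   # Pearson r
        if r > threshold:
            hits.append(i)                # candidate QRS onset index
    return hits
```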

    Objective Assessment of Machine Learning Algorithms for Speech Enhancement in Hearing Aids

    Speech enhancement in assistive hearing devices has been an area of research for many decades. Noise reduction is particularly challenging because of the wide variety of noise sources and the non-stationarity of speech and noise. Digital signal processing (DSP) algorithms deployed in modern hearing aids for noise reduction rely on certain assumptions about the statistical properties of undesired signals. This can be disadvantageous for accurately estimating different noise types, which subsequently leads to suboptimal noise reduction. In this research, a relatively unexplored technique based on deep learning, i.e. the Recurrent Neural Network (RNN), is used to perform noise reduction and dereverberation for assisting hearing-impaired listeners. For noise reduction, the performance of the deep learning model was evaluated objectively and compared with that of the open Master Hearing Aid (openMHA), a conventional signal-processing-based framework, and a Deep Neural Network (DNN) based model. It was found that the RNN model can suppress noise and improve speech understanding better than the conventional hearing aid noise reduction algorithm and the DNN model. The same RNN model was shown to reduce reverberation components given proper training. A real-time implementation of the deep learning model is also discussed.
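
    The abstract does not specify the network, so the sketch below shows a generic formulation commonly used for RNN-based enhancement: a recurrent net that predicts a bounded time-frequency gain mask applied to noisy magnitude spectra. The layer types, sizes, and mask formulation are assumptions, not the thesis's model.

```python
import torch
import torch.nn as nn

class MaskRNN(nn.Module):
    """Generic RNN mask estimator: noisy magnitude spectra in,
    per-bin gains in [0, 1] out. Sizes are illustrative only."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_freq, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_freq)

    def forward(self, noisy_mag):             # (batch, frames, n_freq)
        h, _ = self.rnn(noisy_mag)
        mask = torch.sigmoid(self.out(h))     # bounded spectral gain
        return mask * noisy_mag               # enhanced magnitude

model = MaskRNN()
x = torch.rand(1, 100, 257)                   # stand-in noisy spectrogram
enhanced = model(x)
```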

    Machine Learning Algorithms for Robotic Navigation and Perception and Embedded Implementation Techniques

    The abstract is provided in the attachment.

    The characterisation of multiple defects in components using artificial neural networks

    This thesis investigates the use of artificial neural networks (ANNs) as a means of processing signals from non-destructive tests, to characterise defects and provide more information regarding the condition of the component than would otherwise be possible for an operator to obtain from the test data. ANNs are used both as pattern classifiers and as function approximators. In the first part of the thesis, finite element analysis was carried out on a simple component containing a single defect modelled as a void, simulating three kinds of non-destructive test: an impact method that sent a stress wave through the component, an analysis of natural frequencies, and an ultrasonic pulse-echo method. The inputs to the ANNs were data from the numerical model, and the outputs were the x and y co-ordinates of the defect in the case of the impact and frequency methods, and the size of and distance to the defect in the case of the ultrasonic method. Very good accuracy was observed for all three methods. Experimental validation of the ultrasonic method was carried out, and the ANNs returned accurate outputs for the position and size of a circular hole in a steel plate when presented with experimental data. When the ANNs were presented with noisy input data, their reduction in accuracy was small in comparison with published data from similar studies. In the second part of the thesis, the case of two defects lying within one wavelength of each other was considered, where the reflected ultrasonic waves from each defect overlapped, partially cancelling each other out and reducing the overall amplitude. A novel ANN-based approach was developed to decouple the overlapping signals, characterising each defect in terms of its position and size. Optimisation of the ANN architecture was carried out to maximise the ability of the ANN to generalise when presented with previously unseen data. Finally, an ANN-based general defect characterisation 'expert system' is presented, using data from an ultrasonic test as its input and classifying cases according to the number of defects present. The system then characterised the defects present in the component in terms of their location and size, providing more information regarding the component's condition than would be possible with existing techniques.
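
    As a hedged illustration of the function-approximation role of ANNs described above (not the thesis's trained networks or data), the sketch below fits a small multilayer perceptron that maps ultrasonic echo features to a defect's distance and size. The feature dimensions, network size, and the synthetic stand-in data are all assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 200 "A-scans" of 64 echo samples each,
# labelled with normalised [distance, size]. Real inputs would come
# from the finite element model or pulse-echo measurements above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.uniform(size=(200, 2))

ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X, y)                      # function approximation: echo -> defect
pred = ann.predict(X[:1])          # [[distance, size]] for one test signal
```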

    Advanced Control of Piezoelectric Actuators.

    Over recent decades, precision engineering has played an important role as a cutting-edge technology, in which the trend towards reducing the size of industrial tools has been key. Industrial processes began to demand precision in the range of nanometres to micrometres. Since conventional actuators can neither be miniaturised sufficiently nor achieve such accuracy, piezoelectric actuators have emerged as an innovative technology in this field, and their performance is still under study in the scientific community. Piezoelectric actuators are commonly used in micro- and nano-mechatronics for positioning applications owing to their high resolution and actuation force (they can withstand forces of up to 100 newtons) relative to their size. These characteristics can also be combined with fast actuation and stiffness, depending on the requirements of the application. With these features, piezoelectric actuators can therefore be used in a wide variety of industrial applications. Negative effects such as creep, vibration and hysteresis are commonly studied in order to improve performance where high precision is required. Hysteresis is one of the effects that most degrades the performance of PEAs. It arises especially when the actuator is used in a guiding application, where hysteresis can induce errors of up to 22%. This nonlinear phenomenon can be defined as an effect generated by the combination of mechanical and electrical actions that depends on previous states. Hysteresis can be reduced mainly through two strategies: material redesign or feedback control algorithms. Material redesign entails several disadvantages, so this thesis focuses on the design of control algorithms to reduce hysteresis. The main objective of this thesis is the development of advanced control strategies that can improve the tracking precision of commercial piezoelectric actuators.
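
    The hysteresis this thesis targets is often represented in the PEA literature with a Bouc-Wen model; the sketch below simulates such a loop as a hedged illustration of the history-dependent voltage-to-displacement behaviour. The model form and all constants are assumptions, not parameters identified in the thesis.

```python
import numpy as np

# Illustrative Bouc-Wen simulation of PEA hysteresis: displacement y
# lags the drive voltage u, tracing a hysteresis loop. All constants
# are invented for the sketch, not identified from a real actuator.
dt = 1e-4
t = np.arange(0.0, 0.04, dt)
u = 50.0 * np.sin(2 * np.pi * 50 * t)         # drive voltage (V)
du = np.gradient(u, dt)                       # voltage rate (V/s)
d_p, alpha, beta, gamma = 0.02, 0.3, 0.02, 0.01
h = 0.0
y = np.empty_like(u)
for k in range(len(t)):
    # Bouc-Wen internal state: captures dependence on previous states
    h += dt * (alpha * d_p * du[k] - beta * abs(du[k]) * h - gamma * du[k] * abs(h))
    y[k] = d_p * u[k] - h                     # displacement (illustrative units)
```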