94 research outputs found

    Low-power neuromorphic sensor fusion for elderly care

    Smart wearable systems have become a necessary part of daily life, with applications ranging from entertainment to healthcare. In the wearable healthcare domain, the development of embedded wearable fall-recognition bracelets is receiving considerable market attention. In low-power embedded scenarios, however, sensor signal processing poses significant challenges for machine learning algorithms. Traditional machine learning methods require a large amount of computation for data classification, making real-time signal processing difficult to implement on low-power embedded systems. Ensuring low-power, real-time data classification while fusing a variety of sensor signals on an embedded system is a major challenge, and it motivates the introduction of neuromorphic computing under a hardware/software co-design concept. This thesis reviews various neuromorphic computing algorithms, investigates the feasibility of hardware circuits, and integrates captured sensor data to realise data classification applications. In addition, it explores a human-activity benchmark dataset, in which defined difficulty levels are used to design the activity classification task. In this study, firstly, the data classification algorithm is applied to human movement sensors to validate neuromorphic computing on human activity recognition tasks. Secondly, a data fusion framework is presented that combines multiple sensing signals so that neuromorphic computing achieves sensor fusion and improved classification accuracy. Thirdly, an analog circuit module that carries out the neural network algorithm in low-power, real-time hardware is proposed, and a hardware/software co-design system combines the above work. 
By adopting multi-sensing signals on the embedded system, the designed software-based feature extraction method fuses data from various sensors into a single input for the neuromorphic computing hardware. Finally, the results show that the neuromorphic data fusion framework achieves higher classification accuracy than traditional machine learning and deep neural networks, reaching 98.9%. Moreover, the framework can flexibly combine signals from different acquisition hardware: it is not limited to single-sensor data and can use multi-sensing information to give the algorithm better stability.
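As a rough illustration of the software-based fusion step described above, the sketch below windows several raw sensor streams, extracts simple per-window statistics, and concatenates them into one feature matrix for a downstream classifier. The window length, statistics, and sensor names are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def window_features(signal, win=50):
    """Split a 1-D sensor stream into fixed-length windows and compute
    simple statistics (mean, std, peak-to-peak) for each window."""
    n = len(signal) // win
    w = np.asarray(signal[:n * win], dtype=float).reshape(n, win)
    return np.stack([w.mean(axis=1),
                     w.std(axis=1),
                     w.max(axis=1) - w.min(axis=1)], axis=1)

def fuse(streams, win=50):
    """Concatenate per-window features from several sensors into one
    feature matrix -- the software-based fusion step."""
    return np.hstack([window_features(s, win) for s in streams])

# Hypothetical accelerometer and gyroscope streams (500 samples each)
rng = np.random.default_rng(0)
accel = rng.standard_normal(500)
gyro = rng.standard_normal(500)
X = fuse([accel, gyro])  # 10 windows x (2 sensors x 3 features) = (10, 6)
```

Each row of `X` summarises one time window across all sensors, so a classifier (neuromorphic or otherwise) sees a single fused input rather than per-sensor streams.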

    Super-resolution: A comprehensive survey


    Pertanika Journal of Science & Technology


    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the most rich and distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. 
My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again mostly on the basis of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
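Strand (a), vector symbolic architectures, can be illustrated with a minimal holographic-reduced-representation sketch: circular convolution binds a role and a filler into a single vector of the same dimensionality, and circular correlation approximately recovers the filler, with proximity in the vector space standing in for sameness of semantic value. The dimensionality and the vectors below are arbitrary illustrations, not drawn from the dissertation.

```python
import numpy as np

def bind(a, b):
    """Circular convolution: bind role and filler into one same-size vector."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Circular correlation: approximately recover the filler bound to role a."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

def cos(u, v):
    """Cosine similarity: proximity in the representational space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

d = 512
rng = np.random.default_rng(1)
agent, mary, john = (rng.standard_normal(d) / np.sqrt(d) for _ in range(3))

trace = bind(agent, mary)        # "Mary fills the agent role"
recovered = unbind(trace, agent) # a noisy copy of `mary`
```

A clean-up memory would then map the noisy `recovered` vector to the nearest stored item: `cos(recovered, mary)` comes out high, while `cos(recovered, john)` stays near zero, which is how such architectures address structured, systematic combination within a purely vectorial medium.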

    Artificial neural network and its applications in quality process control, document recognition and biomedical imaging

    In a computer-vision-based system, a digital image obtained by a digital camera is usually a 24-bit color image. Analysing an image with that many levels may require complicated image processing techniques and higher computational cost. In real-time applications, where a part has to be inspected within a few milliseconds, we must either reduce the image to a more manageable number of gray levels, usually two (a binary image), while retaining all necessary features of the original image, or develop a more complicated technique. A binary image can be obtained by thresholding the original image into two levels; thresholding a given image into a binary image is therefore a necessary step for most image analysis and recognition techniques. In this thesis, we study the effectiveness of using artificial neural networks (ANNs) for image thresholding and classification in pharmaceutical, document recognition and biomedical imaging applications. We develop edge-based, ANN-based and region-growing-based image thresholding techniques to extract low-contrast objects of interest and classify them into their respective classes. Real-time quality inspection of gelatin capsules in pharmaceutical applications is an important issue for industry's productivity and competitiveness, and a computer-vision-based automatic quality inspection and control system is one solution to this problem. Machine vision systems provide quality control and real-time feedback for industrial processes, overcoming physical limitations and the subjective judgment of humans. In this thesis, we develop an image processing system using edge-based image thresholding techniques for quality inspection that satisfies the industrial requirements of pharmaceutical applications for separating accepted and rejected capsules. 
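The two-level reduction described above is, at its simplest, a global threshold. The minimal sketch below uses a fixed illustrative threshold, not the edge-based technique developed in the thesis, to show the operation that turns a grayscale image into a binary one.

```python
import numpy as np

def binarize(gray, t=128):
    """Threshold an 8-bit grayscale image into a two-level (binary)
    image: 1 where the pixel value is at least t, 0 elsewhere."""
    return (np.asarray(gray) >= t).astype(np.uint8)

img = np.array([[ 10, 200],
                [130,  50]], dtype=np.uint8)
binary = binarize(img)  # [[0, 1], [1, 0]]
```

Everything downstream (feature measurement, classification) then operates on this two-level image rather than the full 24-bit original.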
In the document recognition application, the success of OCR depends largely on the quality of the thresholded image; non-uniform illumination, low contrast and complex backgrounds make this challenging. In this thesis, optimal parameters for an ANN-based local thresholding approach for gray-scale composite document images with non-uniform backgrounds are proposed. An exhaustive search was conducted to select the optimal features, finding that pixel value, mean and entropy are the most significant features at a 3x3 window size in this application. For other applications the optimal features might differ, but the procedure to find them is the same. The average recognition rate of 99.25% shows that the proposed three features at a 3x3 window size are optimal in terms of recognition rate and PSNR compared to the ANN-based thresholding techniques with different parameters presented in the literature. In the biomedical imaging application, breast cancer continues to be a public health problem. In this thesis we present a computer-aided diagnosis (CAD) system for mass detection and classification in digitized mammograms, which performs mass detection on regions of interest (ROI) followed by benign-malignant classification of the detected masses. A three-layer ANN with seven features is proposed for classifying the marked regions into benign and malignant, achieving 90.91% sensitivity and 83.87% specificity, which is very promising compared with the radiologist's sensitivity of 75%.
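The three features reported as optimal at a 3x3 window (pixel value, window mean, window entropy) can be sketched as follows. The 8-bin histogram used for the entropy estimate is an assumption for illustration; in the thesis's approach, a trained ANN would then map each per-pixel feature vector to a binary (foreground/background) output.

```python
import numpy as np

def local_features(gray, r=1):
    """Per-pixel features over a (2r+1)x(2r+1) window (3x3 when r=1):
    pixel value, window mean, and entropy of an 8-bin window histogram
    (the bin count is an illustrative assumption)."""
    gray = np.asarray(gray)
    h, w = gray.shape
    feats = np.zeros((h, w, 3))
    padded = np.pad(gray, r, mode='edge')  # replicate borders
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].astype(float)
            hist, _ = np.histogram(win, bins=8, range=(0, 256))
            p = hist[hist > 0] / win.size
            entropy = float(-(p * np.log2(p)).sum())
            feats[i, j] = (gray[i, j], win.mean(), entropy)
    return feats
```

On a perfectly flat region the mean equals the pixel value and the entropy is zero, while busy textured regions near text strokes score higher entropy, which is what makes the feature informative for local thresholding.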

    Intelligent X-ray imaging inspection system for the food industry.

    The inspection process of a product is an important stage of a modern production factory. This research presents a generic X-ray imaging inspection system, with application to the detection of foreign bodies in a meat product for the food industry. The most important modules in the system are the image processing module and the high-level detection system. This research discusses the use of neural networks for image processing and fuzzy logic for the detection of potential foreign bodies found in X-ray images of chicken breast meat after the de-boning process. The meat product is passed under a solid-state X-ray sensor that acquires a dual-band two-dimensional image of the meat (a low- and a high-energy image). A series of image processing operations is applied to the acquired image (pre-processing, noise removal, contrast enhancement). The most important step of the image processing is the segmentation of the image into meaningful objects. Segmentation is difficult because of the lack of clarity of the acquired X-ray images: the resulting segmented image contains not only correctly identified foreign bodies but also regions caused by overlapping muscle in the meat, which appear very similar to foreign bodies in the X-ray image. A Hopfield neural network architecture was proposed for the segmentation of an X-ray dual-band image. A number of image processing measurements were made on each object (geometrical and grey-level-based statistical features), and these features were used as the input to a fuzzy-logic-based high-level detection system whose function was to differentiate between bone and non-bone segmented regions. The results show that the system's performance is considerably improved over non-fuzzy (crisp) methods. Possible noise affecting the system is also investigated. The proposed system proved to be robust and flexible while achieving a high level of performance. 
Furthermore, it is possible to use the same approach when analysing images from other application areas, from the automotive industry to medicine.
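To illustrate the fuzzy high-level detection stage, the toy two-rule sketch below combines two triangular memberships with a Mamdani-style minimum for the fuzzy AND. The features chosen and their breakpoints are invented for illustration, not the tuned values or full rule base used in this research.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bone_score(area, mean_grey):
    """Toy rule: a segmented region is 'bone-like' when its area is
    small AND its mean grey level is dark (AND = minimum).
    Breakpoints are illustrative assumptions."""
    small_area = tri(area, 0, 40, 120)   # membership in 'small blob'
    dark = tri(mean_grey, 0, 60, 140)    # membership in 'dark region'
    return min(small_area, dark)

compact_dark = bone_score(35, 55)    # high score: likely bone fragment
large_bright = bone_score(200, 180)  # zero: overlapping muscle region
```

Unlike a crisp threshold, the score degrades gradually as a region becomes larger or brighter, which is what lets the fuzzy stage tolerate the ambiguous muscle-overlap regions produced by segmentation.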
