65 research outputs found
On the calculation of the linear complexity of periodic sequences
Based on a 2006 result of Hao Chen, we present a general procedure for reducing the determination of the linear complexity of a sequence over a finite field \F_q of one period to the determination of the linear complexities of sequences over \F_q of a shorter period. We apply this procedure to some classes of periodic sequences over a finite field \F_q, obtaining efficient algorithms to determine the linear complexity.
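The quantity being computed above, the linear complexity, is the length of the shortest LFSR generating the sequence. For a single sequence over \F_2 it can be computed directly with the Berlekamp–Massey algorithm; the following is a minimal sketch of that standard algorithm (it is not the reduction procedure of the paper, and handles only the binary case):

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c = [0] * n; b = [0] * n          # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                      # L = current complexity, m = last update index
    for i in range(n):
        # discrepancy between s[i] and the LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                         # prediction failed: adjust the polynomial
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

For example, the constant sequence 1,1,1,… satisfies s_i = s_{i-1} and has linear complexity 1, while 0,0,1 needs a degree-3 recurrence.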
Sign Language Recognition Using Artificial Rabbits Optimizer with Siamese Neural Network for Persons with Disabilities
Sign language recognition is an effective solution for individuals with disabilities to communicate with others, helping them convey information using sign language. Recent advances in computer vision (CV) and image processing algorithms can be employed for effective sign detection and classification. Since the hyperparameters involved in deep learning (DL) algorithms considerably affect the classification results, metaheuristic optimization algorithms can be designed to tune them. In this respect, this manuscript offers the design of a Sign Language Recognition using Artificial Rabbits Optimizer with Siamese Neural Network (SLR-AROSNN) technique for persons with disabilities. The proposed SLR-AROSNN technique mainly focuses on the recognition of multiple kinds of sign language posed by disabled persons. The goal of the SLR-AROSNN technique lies in the effective exploitation of CV, DL, and parameter-tuning strategies. It employs the MobileNet model to derive feature vectors. For the identification and classification of sign languages, a Siamese neural network is used. At the final stage, the SLR-AROSNN technique makes use of the ARO algorithm to obtain improved sign recognition results. To illustrate the improvement of the SLR-AROSNN technique, a series of experimental validations was conducted. The attained outcomes reported the superiority of the SLR-AROSNN technique in the sign recognition process.
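The core Siamese idea in this abstract is a shared embedding applied to two inputs, with a distance threshold deciding whether they show the same sign. A toy NumPy sketch of that idea follows; the random projection `W` is only a stand-in for MobileNet feature extraction, and all names and thresholds here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 32))   # illustrative stand-in for a MobileNet feature map

def embed(v):
    """Shared ('Siamese') embedding: both inputs pass through the same weights."""
    h = v @ W
    return h / np.linalg.norm(h)             # L2-normalise the embedding

def same_sign(a, b, threshold=0.5):
    """Declare two inputs the same sign class if their embeddings are close."""
    return float(np.linalg.norm(embed(a) - embed(b))) < threshold

x = rng.normal(size=128)                     # a "sign image" feature vector
x_var = x + 0.01 * rng.normal(size=128)      # slightly perturbed view of the same sign
y = rng.normal(size=128)                     # an unrelated input
```

In a real pipeline the pair distance would feed a contrastive or triplet loss during training; here the fixed threshold merely illustrates inference.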
Exploiting Deep Learning Based Automated Fire-detection Model for Blind and Visually Challenged People
An increasing number of elderly people suffer from high levels of vision and cognitive impairment, frequently resulting in loss of independence. Fire recognition and notification approaches offer fire prevention and security information to blind and visually impaired (BVI) persons for a short duration under emergency conditions when fires occur in indoor surroundings. Fire detection is a complex but critical problem for the direct protection of people and their surroundings. In order to avoid injuries and physical damage, the latest technologies require suitable approaches for identifying fires as early as possible. This study exploits the sine cosine algorithm with a deep learning model for automated fire detection (SCADL-AFD) to aid blind and visually challenged people. To accomplish this, the SCADL-AFD technique focuses on the examination of input images for the recognition of possible fire situations. Primarily, the SCADL-AFD technique investigates the input images using the EfficientNet model to produce feature vectors. For fire-recognition purposes, the SCADL-AFD technique applies the gated recurrent unit (GRU) model. Finally, the SCA is utilized as a hyperparameter tuning strategy for the GRU model. The simulation outcome of the SCADL-AFD system is validated on a benchmark fire image database, and the outcomes indicate the superiority of the SCADL-AFD system with respect to various measures.
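The SCA tuning stage in this abstract follows the standard sine cosine algorithm position update, in which candidates move toward or away from the best-known solution by sine- and cosine-weighted steps. A minimal sketch on a toy objective follows; in a real use the objective would be validation loss over GRU hyperparameters, and all parameter values here are illustrative:

```python
import numpy as np

def sca_minimize(f, dim, iters=200, pop=20, lb=-5.0, ub=5.0, a=2.0, seed=1):
    """Minimal sine cosine algorithm for continuous minimisation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        r1 = a - t * a / iters                  # shrinks: exploration -> exploitation
        for i in range(pop):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            step = r1 * np.abs(r3 * best - X[i])
            # each dimension randomly uses the sine or the cosine branch
            wave = np.where(rng.random(dim) < 0.5, np.sin(r2), np.cos(r2))
            X[i] = np.clip(X[i] + wave * step, lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

# toy sphere objective standing in for a hyperparameter-validation loss
sol = sca_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```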
Design of Information Feedback Firefly Algorithm with a Nested Deep Learning Model for Intelligent Gesture Recognition of Visually Disabled People
Gesture recognition is a developing topic in current technologies. The focus is to detect human gestures using mathematical methods for human–computer interaction. Common modes of human–computer interaction include touch screens, keyboards, and mice; each of these devices has its merits and demerits when implemented as versatile computer hardware. Gesture detection is one of the vital methods for constructing user-friendly interfaces. Generally, gestures can be created from any bodily state or motion but typically originate from the hand or face. Therefore, this manuscript designs an Information Feedback Firefly Algorithm with Nested Deep Learning (IFBFFA-NDL) model for intelligent gesture recognition for visually disabled people. The presented IFBFFA-NDL technique exploits the concepts of DL with a metaheuristic hyperparameter tuning strategy for the recognition process. To generate a collection of feature vectors, the IFBFFA-NDL technique uses the NASNet model; for optimal hyperparameter selection of the NASNet model, the IFBFFA algorithm is used. To recognize different types of gestures, a nested long short-term memory classification model is used. To exhibit the improved gesture-detection efficiency of the IFBFFA-NDL technique, a detailed comparative result analysis was conducted, and the outcomes highlighted the improved recognition rate of the IFBFFA-NDL technique, 99.73%, compared to recent approaches.
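The IFBFFA tuner builds on the classic firefly algorithm, in which dimmer solutions move toward brighter (better) ones with an attractiveness that decays with distance. A minimal sketch of the base firefly update follows; the paper's information-feedback variant is not reproduced here, and all constants are illustrative:

```python
import numpy as np

def firefly_minimize(f, dim, n=15, iters=100, alpha=0.3, beta0=1.0, gamma=1.0, seed=2):
    """Minimal firefly algorithm for continuous minimisation (lower f = brighter)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                      # firefly j is brighter
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
        alpha *= 0.97                                    # damp the random walk over time
    return min(X, key=f)

# toy sphere objective standing in for a hyperparameter-validation loss
sol = firefly_minimize(lambda x: float(np.sum(x ** 2)), dim=2)
```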
Automated Gesture-Recognition Solutions using Optimal Deep Belief Network for Visually Challenged People
Gestures are a vital part of our communication. Gesturing is a procedure of nonverbal communication that stimulates great interest in human–computer interaction methods, as it permits users to express themselves intuitively and naturally in various contexts. Hand gestures play a vital role in the domain of assistive technologies for visually impaired people (VIP), but an optimal user-interaction design is of great significance. Existing studies on assisting VIP mostly concentrate on resolving a single task (such as reading text or identifying obstacles), thus making the user switch applications to perform other actions. Therefore, this research presents an interactive gesture technique using sandpiper optimization with a deep belief network (IGSPO-DBN). The purpose of the IGSPO-DBN technique is to enable people to handle devices and exploit different assistance models through different gestures. The IGSPO-DBN technique detects gestures and classifies them into several kinds using the DBN model. To boost the overall gesture-recognition rate, the IGSPO-DBN technique exploits the SPO algorithm as a hyperparameter optimizer. The simulation outcome of the IGSPO-DBN approach was tested on a gesture-recognition dataset, and the outcomes showed the improvement of the IGSPO-DBN algorithm over other systems.
Computer Vision with Optimal Deep Stacked Autoencoder-based Fall Activity Recognition for Disabled Persons in the IoT Environment
Remote monitoring of fall conditions and of the daily life of disabled persons is one of the indispensable purposes of contemporary telemedicine. Artificial intelligence and Internet of Things (IoT) techniques that include deep learning and machine learning methods are now implemented in the field of medicine for automating the detection of diseased and abnormal cases. Many other applications exist, including the real-time detection of fall accidents in older patients. Owing to the articulated nature of human motion, detecting human actions with a high level of accuracy is not trivial for every application. Likewise, recognizing human activity is required to automate systems that monitor and flag suspicious activities during surveillance. In this study, a new Computer Vision with Optimal Deep Stacked Autoencoder Fall Activity Recognition (CVDSAE-FAR) technique for disabled persons is designed. The presented CVDSAE-FAR technique aims to determine the occurrence of fall activity among disabled persons in the IoT environment. In this work, the densely connected networks model is exploited for feature extraction. The DSAE model then receives the feature vectors and classifies the activities effectually. Lastly, the fruit fly optimization method is used for the automated parameter tuning of the DSAE method, which leads to enhanced recognition performance. The CVDSAE-FAR approach was tested on a benchmark dataset, and the extensive experimental results emphasized its superiority compared to recent approaches.
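A deep stacked autoencoder of the kind named above is built by training one autoencoder layer at a time, then stacking the next layer on the learned codes. The following NumPy sketch trains a single tied-weight layer by gradient descent on random stand-in features; it is a toy illustration of the building block, not the paper's model, and the data, sizes, and learning rate are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 16))        # stand-in feature vectors (not real sensor data)

def train_autoencoder(X, hidden=8, epochs=400, lr=0.01):
    """One tied-weight autoencoder layer; a DSAE stacks such layers on the codes."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W)            # encode
        R = H @ W.T                   # decode with tied weights
        err = R - X
        losses.append(float(np.mean(err ** 2)))
        dH = (err @ W) * (1 - H ** 2) # backprop through the tanh encoder
        gW = (X.T @ dH + err.T @ H) / n   # encoder and decoder gradient paths
        W -= lr * gW
    return W, losses

W, losses = train_autoencoder(X)
```

Stacking would repeat `train_autoencoder` on `np.tanh(X @ W)`, and a classifier head on the top code would perform the activity classification.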
Chameleon Swarm Algorithm with Improved Fuzzy Deep Learning for Fall Detection Approach to Aid Elderly People
Over the last few decades, mobile communications and Internet of Things (IoT) processes have been established to collect human and environmental data for a variety of smart applications and services. Remote monitoring of disabled and elderly persons living in smart homes is difficult because of possible accidents, such as falls, that can take place during day-to-day activities. Falls signify a major health problem for elderly people: when a fall is not alerted in time, it can cause death or impairment and decrease quality of life. For elderly persons, falls can be assumed to be the main cause of death from posttraumatic complications. Therefore, early detection of elderly persons' falls in smart homes is required for increasing their survival chances or offering vital support. Accordingly, this study presents a Chameleon Swarm Algorithm with Improved Fuzzy Deep Learning for Fall Detection (CSA-IDFLFD) technique. The CSA-IDFLFD technique helps elderly persons through the identification of fall actions and improves their quality of life. The CSA-IDFLFD technique involves two phases of operation. In the initial phase, the CSA-IDFLFD technique involves the design of the IDFL model for the identification and classification of fall events. In the second phase, the parameters related to the IDFL method are optimally selected by the design of the CSA. To validate the performance of the CSA-IDFLFD technique in the fall detection (FD) process, an extensive experimental evaluation took place. The extensive outcomes stated the improved detection results of the CSA-IDFLFD technique.
Artificial Rabbit Optimizer with deep learning for fall detection of disabled people in the IoT Environment
Fall detection (FD) for disabled persons on an Internet of Things (IoT) platform combines sensor technologies and data analytics to automatically identify and respond to instances of falls. In this regard, IoT devices such as wearable sensors or ambient sensors in the personal space play a vital role in continuously monitoring the user's movements. FD employs deep learning (DL) on an IoT platform using sensors, namely accelerometers or depth cameras, to capture data connected to human movements. The DL approaches are frequently recurrent neural networks (RNNs) or convolutional neural networks (CNNs) that have been trained on various databases to recognize patterns connected with falls. The trained methods are then executed on edge devices or in cloud environments for real-time analysis of incoming sensor data. This method differentiates normal activities from potential falls, triggering alerts and reports to caregivers or emergency numbers once a fall is identified. We designed an Artificial Rabbit Optimizer with a DL-based FD and classification (ARODL-FDC) system for the IoT environment. The ARODL-FDC approach aims to detect and categorize fall events to assist elderly and disabled people. The ARODL-FDC technique comprises a four-stage process. Initially, the preprocessing of input data is performed by Gaussian filtering (GF). The ARODL-FDC technique then applies the residual network (ResNet) model for feature extraction. Besides, the ARO algorithm is utilized for better hyperparameter choice for the ResNet algorithm. At the final stage, the full Elman Neural Network (FENN) model is utilized for the classification and recognition of fall events. The ARODL-FDC technique was experimentally evaluated on a fall dataset, and the simulation results inferred that it reaches promising performance over compared models concerning various measures.
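The Gaussian filtering (GF) preprocessing stage mentioned above amounts to convolving each sensor stream with a normalised Gaussian kernel to suppress high-frequency noise before feature extraction. A minimal 1-D sketch on a synthetic accelerometer-like trace (the signal and parameters are illustrative, not the paper's data):

```python
import numpy as np

def gaussian_filter1d(signal, sigma=2.0):
    """Smooth a 1-D sensor stream by convolving with a normalised Gaussian kernel."""
    radius = int(3 * sigma)                       # truncate the kernel at 3 sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()                        # unit gain, so levels are preserved
    return np.convolve(signal, kernel, mode="same")

# synthetic trace: a slow movement component plus measurement noise
t = np.linspace(0, 4, 400)
clean = np.sin(2 * np.pi * t)
accel = clean + 0.3 * np.random.default_rng(4).normal(size=400)
smooth = gaussian_filter1d(accel)
```

Because the kernel is low-pass, the smoothed trace tracks the slow movement component while most of the sensor noise is attenuated.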
