A review of applications of artificial intelligence in veterinary medicine
Artificial intelligence is a newer concept in veterinary medicine than in human medicine, but its existing benefits illustrate the significant potential it may also have in this field. This article reviews the application of artificial intelligence to various fields of veterinary medicine. Successful integration of different artificial intelligence strategies can offer practical solutions to issues in practice, such as time pressure. Several databases were searched to identify literature on the application of artificial intelligence in veterinary medicine, and inclusion and exclusion criteria were applied to obtain relevant papers. There was evidence of an acceleration of artificial intelligence research in recent years, particularly in diagnostics and imaging. The benefits of using artificial intelligence included standardisation, increased efficiency, and a reduced need for expertise in particular fields. However, limitations identified in the literature included a requirement for ideal conditions for artificial intelligence to achieve accuracy, as well as other inherent, unresolved issues. Ethical considerations and a hesitancy to engage with artificial intelligence, among both the public and veterinarians, are further barriers that must be addressed before artificial intelligence can be fully integrated into daily practice. The rapid growth in artificial intelligence research substantiates its potential to improve veterinary practice.
Automatic Classification of Cat Vocalizations Emitted in Different Contexts
Cats employ vocalizations to communicate information, so their sounds can carry a wide range of meanings. An aspect of increasing relevance, directly connected with the welfare of these animals, is the emotional interpretation of vocalizations and the recognition of their production context. To this end, this work presents a proof of concept for the automatic analysis of cat vocalizations based on signal processing and pattern recognition techniques, aimed at demonstrating whether the emission context can be identified from meowing vocalizations, even when recorded in sub-optimal conditions. We rely on a dataset of vocalizations from Maine Coon and European Shorthair breeds emitted in three different contexts: waiting for food, isolation in an unfamiliar environment, and brushing. To capture the emission context, we extract two sets of acoustic parameters, i.e., mel-frequency cepstral coefficients and temporal modulation features. These are then modeled using a classification scheme based on a directed acyclic graph that divides the problem space. Our experiments demonstrate the superiority of this scheme over a series of generative and discriminative classification solutions. These results open up new perspectives for deepening our knowledge of acoustic communication between humans and cats and, more generally, between humans and animals.
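The directed-acyclic-graph scheme described above reduces a multi-class problem to a chain of binary decisions: each node rules out one class, and the surviving pair is passed to the next node. The sketch below illustrates this structure only; it is not the authors' implementation. The nearest-centroid binary classifiers and the synthetic 13-dimensional features (a stand-in for MFCC vectors) are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 13-dim feature vectors standing in for MFCC / temporal-modulation
# descriptors of meows from the three contexts named in the abstract.
contexts = ["food", "isolation", "brushing"]
centers = {"food": 0.0, "isolation": 3.0, "brushing": 6.0}
train = {c: rng.normal(centers[c], 0.5, size=(40, 13)) for c in contexts}

def binary_node(a, b, x):
    """One DAG node: decide between contexts a and b by nearest class centroid."""
    ca = train[a].mean(axis=0)
    cb = train[b].mean(axis=0)
    return a if np.linalg.norm(x - ca) <= np.linalg.norm(x - cb) else b

def dag_classify(x):
    # Root node compares the two "outer" classes and eliminates the loser;
    # the winner then faces the remaining class, so a 3-class problem
    # becomes exactly two binary decisions along one DAG path.
    survivor = binary_node("food", "brushing", x)
    return binary_node(survivor, "isolation", x)

sample = rng.normal(centers["isolation"], 0.5, size=13)
print(dag_classify(sample))
```

With k classes this layout needs k-1 binary decisions per sample, which is one reason DAG schemes are attractive compared with evaluating every one-vs-one pair.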
Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data
The use of IoT (Internet of Things) technology for the management of pet dogs left alone at home is increasing. This includes tasks such as automatic feeding, operation of play equipment, and location detection. Classifying the vocalizations of pet dogs using information from a sound sensor is an important method for analyzing the behavior or emotions of dogs that are left alone. These sounds are typically acquired by attaching an IoT sound sensor to the dog, after which the sound events (e.g., barking, growling, howling, and whining) are classified. However, sound sensors tend to transmit large amounts of data and consume considerable power, which presents issues for resource-constrained IoT sensor devices. In this paper, we propose a way to classify pet dog sound events and improve resource efficiency without significant degradation of accuracy. To achieve this, we acquire only the intensity of sounds by using a relatively resource-efficient noise sensor. This in turn presents its own issue: it is difficult to achieve sufficient classification accuracy using intensity data alone, owing to the information lost from the sound events. To address this problem while avoiding significant degradation of classification accuracy, we apply the long short-term memory-fully convolutional network (LSTM-FCN), a deep learning method for analyzing time-series data, and exploit bicubic interpolation. Based on experimental results, the proposed method using noise sensors (i.e., Shapelet and LSTM-FCN for time series) was found to improve energy efficiency tenfold without significant degradation of accuracy compared with typical methods based on sound sensors (i.e., mel-frequency cepstrum coefficient (MFCC), spectrogram, and mel-spectrum for feature extraction, and support vector machine (SVM) and k-nearest neighbor (K-NN) for classification).
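The interpolation step in the abstract serves to densify the sparse intensity samples from the noise sensor before they reach the time-series classifier. As a hedged illustration of that idea, the sketch below upsamples a 1-D intensity series with Catmull-Rom cubic interpolation; this is a numpy-only stand-in for the bicubic interpolation the paper uses, and the intensity values are invented for the example.

```python
import numpy as np

def catmull_rom_upsample(y, factor):
    """Upsample a 1-D intensity series with Catmull-Rom cubic interpolation.

    A stand-in for the interpolation step in the abstract: low-rate
    noise-sensor intensities are densified before being fed to a
    time-series classifier such as an LSTM-FCN.
    """
    y = np.asarray(y, dtype=float)
    # Pad the endpoints so every segment has two neighbours on each side.
    p = np.concatenate(([y[0]], y, [y[-1]]))
    out = []
    for i in range(len(y) - 1):
        p0, p1, p2, p3 = p[i], p[i + 1], p[i + 2], p[i + 3]
        for t in np.linspace(0.0, 1.0, factor, endpoint=False):
            # Standard Catmull-Rom cubic; at t=0 this reduces to p1,
            # so the original samples are preserved exactly.
            out.append(0.5 * (2 * p1
                              + (-p0 + p2) * t
                              + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                              + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    out.append(y[-1])
    return np.array(out)

coarse = [0.1, 0.8, 0.3, 0.9, 0.2]   # hypothetical bark-intensity samples
dense = catmull_rom_upsample(coarse, 4)
print(len(dense))  # 4 segments * 4 points + final sample = 17
```

Because the cubic passes through every original sample, the classifier sees a smoother, higher-rate signal without any of the measured intensities being altered.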