
    ChatGPT in the context of precision agriculture data analytics

    In this study we argue that integrating ChatGPT into the data processing pipeline of automated sensors in precision agriculture has the potential to bring several benefits and enhance various aspects of modern farming practices. Policy makers often face a barrier when they need to be informed about the situation in vast agricultural fields in order to reach decisions. They depend on close collaboration between agricultural experts in the field, data analysts, and technology providers to form interdisciplinary teams, which cannot always be assembled on demand, and on effective communication across these diverse domains, which is difficult to sustain in real time. In this work we argue that the speech recognition input modality of ChatGPT provides a more intuitive and natural way for policy makers to interact with the database of an agricultural data processing server to which a large, dispersed network of automated insect traps and sensor probes reports. The large language model maps the speech input to text, allowing users to form their own unconstrained verbal queries and removing the barrier of having to learn and adapt to specific data analytics software. The output of the language model can interact with the entire database through Python code and Pandas, visualize the results, and use speech synthesis to engage the user in an iterative, refining discussion of the data. We show three ways in which ChatGPT can interact with the database of the remote server to which a dispersed network of sensors with different modalities (optical counters, vibration recordings, pictures, and video) reports. We examine the potential and the validity of ChatGPT's responses in analyzing and interpreting agricultural data, providing real-time insights and recommendations to stakeholders. Comment: 33 pages, 21 figures
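
    The interaction loop described in this abstract, a spoken question mapped to generated Python that queries the trap database, can be sketched minimally as follows. Here `llm_generate_code` is a hypothetical, hard-coded stand-in for the ChatGPT call, and the database is mocked as a list of records rather than a Pandas DataFrame; a real system would also sandbox the evaluation step.

```python
from statistics import mean

def llm_generate_code(question: str) -> str:
    # Hypothetical stand-in for a ChatGPT call that maps a transcribed
    # spoken question to a Python expression over the trap database `db`.
    if "average" in question:
        return ("{t: mean(r['count'] for r in db if r['trap_id'] == t) "
                "for t in {r['trap_id'] for r in db}}")
    return "db"

def answer(question: str, db):
    # Evaluate the generated expression against the sensor database only;
    # a production system would sandbox this step before execution.
    code = llm_generate_code(question)
    return eval(code, {"db": db, "mean": mean})

# Mock records from two traps in the dispersed network.
traps = [
    {"trap_id": "A", "count": 4}, {"trap_id": "A", "count": 6},
    {"trap_id": "B", "count": 10},
]
print(answer("average insect count per trap", traps))
```

    In the paper's setting the result would then be visualized and read back via speech synthesis, closing the iterative query loop.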

    A Concept Drift-Aware DAG-Based Classification Scheme for Acoustic Monitoring of Farms

    Intelligent farming, as part of the green revolution, is making farms dynamic, with the overall aim of optimizing animal production in an eco-friendly way. In this direction, this study proposes exploiting the acoustic modality for farm monitoring. Such information could be used in a stand-alone or complementary mode to monitor the farm constantly at a great level of detail. To this end, the authors designed a scheme classifying the vocalizations produced by farm animals. More precisely, a directed acyclic graph was proposed, where each node carries out a binary classification task using hidden Markov models. The topological ordering follows a criterion derived from the Kullback-Leibler divergence. In addition, a transfer learning-based module for handling concept drifts was proposed. During the experimental phase, the authors employed a publicly available dataset including vocalizations of seven animals typically encountered on farms, and promising recognition rates were reported.
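
    The divergence-derived ordering can be illustrated with a small sketch. This is not the paper's exact criterion (which orders nodes of the DAG of binary HMM classifiers); it is a simplified stand-in that ranks classes by their mean symmetric Kullback-Leibler divergence from the others, so the most separable class is split off first. Distributions are assumed discrete and strictly positive.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) for discrete distributions;
    # assumes q has no zero entries wherever p is positive.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ordering(class_dists):
    # Rank classes by mean symmetric KL divergence against all other
    # classes: a simplified stand-in for the paper's topological ordering.
    def separability(name):
        d = class_dists[name]
        others = [v for k, v in class_dists.items() if k != name]
        return sum(kl(d, o) + kl(o, d) for o in others) / len(others)
    return sorted(class_dists, key=separability, reverse=True)

# Toy per-class feature distributions over three acoustic bins.
dists = {
    "cow":   [0.8, 0.1, 0.1],
    "sheep": [0.7, 0.2, 0.1],
    "bird":  [0.1, 0.1, 0.8],
}
print(ordering(dists))  # the most separable class comes first
```

    The intuition is that easy binary splits near the root keep errors from propagating down the graph.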

    Automated detection and monitoring of grain beetles using a “smart” pitfall trap: Poster

    A smart, electronic, modified pitfall trap for automatic detection of adult beetle pests inside the grain mass is presented. The system is equipped with optoelectronic sensors that guard the entrance of the trap in order to detect, time-stamp, and GPS-tag each incoming insect. Insect counts, as well as environmental parameters that correlate with insect population development, are wirelessly transmitted to a central monitoring agency in real time, where they are visualized and streamed to statistical methods to assist effective control of grain pests. The prototype trap was placed in a large plastic barrel (120 L) with 80 kg of maize. Adult beetles of various species were collected from laboratory rearings and transferred to the experimental barrel. Caught beetle adults were checked and counted after 24 h, and the manual counts were compared with those from the electronic system. Results from the evaluation procedure showed that our system is very accurate, reaching 98-99% accuracy on automatic counts compared with the real numbers of adult beetles detected inside the trap. In this work we emphasize how the traps can be self-organized into networks that collectively report data at local, regional, country, continental, and global scales using the emerging technology of the Internet of Things (IoT). We argue that smart traps communicating through IoT to report the pest population level in the grain mass straight to a human-controlled agency in real time can, in the very near future, have a profound impact on the decision making process in stored grain protection.
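
    The detect, time-stamp, and GPS-tag step, followed by comparison of automatic counts against a 24 h manual check, can be sketched as below. Field names and the `tag_event`/`daily_count` helpers are illustrative, not the trap's actual firmware interface.

```python
from datetime import datetime, timezone

def tag_event(trap_id, gps, when=None):
    # One beam-interruption event becomes a time-stamped, GPS-tagged
    # record, ready for wireless transmission to the monitoring agency.
    when = when or datetime.now(timezone.utc)
    return {"trap": trap_id, "lat": gps[0], "lon": gps[1],
            "utc": when.isoformat()}

def daily_count(records, day):
    # Aggregate automatic counts for one day (ISO date prefix), for
    # comparison with the manual 24 h check described in the abstract.
    return sum(1 for r in records if r["utc"].startswith(day))

records = [
    tag_event("barrel-1", (39.36, 22.94), datetime(2024, 1, 2, 8, 0)),
    tag_event("barrel-1", (39.36, 22.94), datetime(2024, 1, 2, 20, 0)),
    tag_event("barrel-1", (39.36, 22.94), datetime(2024, 1, 3, 8, 0)),
]
print(daily_count(records, "2024-01-02"))  # automatic count for that day
```

    In an IoT deployment each trap would transmit such records upstream, and the agency side would aggregate them across the network.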

    Two-Dimensional Convolutional Recurrent Neural Networks for Speech Activity Detection

    Speech Activity Detection (SAD) plays an important role in mobile communications and automatic speech recognition (ASR). Developing efficient SAD systems for real-world applications is a challenging task due to the presence of noise. We propose a new approach to SAD where we treat it as a two-dimensional multilabel image classification problem. To classify the audio segments, we compute their Short-time Fourier Transform spectrograms and classify them with a Convolutional Recurrent Neural Network (CRNN), traditionally used in image recognition. Our CRNN uses a sigmoid activation function, max-pooling in the frequency domain, and a convolutional operation as a moving average filter to remove misclassified spikes. On the development set of Task 1 of the 2019 Fearless Steps Challenge, our system achieved a decision cost function (DCF) of 2.89%, a 66.4% improvement over the baseline. Moreover, it achieved a DCF score of 3.318% on the evaluation dataset of the challenge, ranking first among all submissions
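
    The final smoothing stage mentioned above, a convolution acting as a moving-average filter that removes misclassified spikes, can be illustrated in isolation. This sketch operates on per-frame speech probabilities (here a toy sequence, not the CRNN's real outputs) and thresholds the smoothed values into speech/non-speech decisions.

```python
def smooth(probs, k=5):
    # Moving-average filter over per-frame speech probabilities, the
    # same role the paper's final convolution plays: isolated spikes
    # are averaged away while sustained speech regions survive.
    half = k // 2
    out = []
    for i in range(len(probs)):
        window = probs[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# Toy frame probabilities: one spurious spike, then a speech region.
frames = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]
decisions = [p >= 0.5 for p in smooth(frames)]
print(decisions)  # the lone spike at index 2 is suppressed
```

    Choosing the window length `k` trades spike suppression against blurring of true speech boundaries.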

    Resolving the identification of weak-flying insects during flight: a coupling between rigorous data processing and biology

    1. Bioacoustic methods play an increasingly important role for the detection of insects in a range of surveillance and monitoring programs. 2. Weak-flying insects evade detection because they do not yield sufficient audio information to capture wingbeat and harmonic frequencies. These inaudible insects often pose a significant threat to food security as pests of key agricultural crops worldwide. 3. Automatic detection of such insects is crucial to the future of crop protection by providing critical information to assess the risk to a crop and the need for preventative measures. 4. We describe an experimental setup designed to derive audio recordings from a range of weak-flying aphids and beetles using an LED array. 5. A rigorous data processing pipeline was developed to extract meaningful features, linked to morphological characteristics, from the audio and harmonic series for six aphid and two beetle species. 6. An ensemble of over 50 bioacoustic parameters was used to achieve species discrimination with a success rate of 80%. The inclusion of the dominant and fundamental frequencies improved prediction between beetles and aphids due to large differences in wingbeat frequencies. 7. At the species level, error rates were minimised when harmonic features were supplemented by features indicative of differences in species’ flight energies
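
    Point 6 notes that the dominant and fundamental frequencies help separate beetles from aphids. A minimal sketch of estimating the fundamental (wingbeat) frequency as the discrete Fourier transform magnitude peak is given below; a real pipeline such as the one described would add windowing, noise handling, and checks on the harmonic series.

```python
import math

def wingbeat_fundamental(signal, sample_rate):
    # Estimate the fundamental frequency as the DFT magnitude peak.
    # Plain O(n^2) DFT for clarity; an FFT would be used in practice.
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                 for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# Synthetic 100 Hz "wingbeat" tone sampled at 800 Hz.
sig = [math.sin(2 * math.pi * 100 * t / 800) for t in range(160)]
print(wingbeat_fundamental(sig, 800))
```

    Because beetle and aphid wingbeat frequencies differ widely, even this single feature already carries useful discriminative information before harmonic features are added.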

    Transfer Learning for Improved Audio-Based Human Activity Recognition

    Human activities are accompanied by characteristic sound events, the processing of which might provide valuable information for automated human activity recognition. This paper presents a novel approach addressing the case where one or more human activities are associated with limited audio data, resulting in a potentially highly imbalanced dataset. Data augmentation is based on transfer learning; more specifically, the proposed method: (a) identifies the classes which are statistically close to the ones associated with limited data; (b) learns a multiple input, multiple output transformation; and (c) transforms the data of the closest classes so that it can be used for modeling the ones associated with limited data. Furthermore, the proposed framework includes a feature set extracted out of signal representations of diverse domains, i.e., temporal, spectral, and wavelet. Extensive experiments demonstrate the relevance of the proposed data augmentation approach under a variety of generative recognition schemes
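
    Steps (a) to (c) above can be sketched with toy feature vectors. This is a deliberate simplification: closeness is measured by distance between class mean features, and the "transformation" is a mean shift rather than the learned multiple-input, multiple-output mapping the paper describes. The `augment` helper and its data are illustrative only.

```python
def class_mean(samples):
    # Per-dimension mean feature vector of a class.
    dim = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def augment(rare, classes):
    # (a) find the class statistically closest to the rare one,
    # (b, c) transform its samples toward the rare class; here a simple
    # mean shift stands in for the learned MIMO transformation.
    mu_rare = class_mean(rare)
    donor = min(classes,
                key=lambda name: dist(class_mean(classes[name]), mu_rare))
    mu_donor = class_mean(classes[donor])
    shift = [r - d for r, d in zip(mu_rare, mu_donor)]
    synthetic = [[x + s for x, s in zip(sample, shift)]
                 for sample in classes[donor]]
    return donor, synthetic

rare = [[0.0, 0.0], [2.0, 0.0]]          # under-represented activity
classes = {"near": [[3.0, 0.0], [5.0, 0.0]], "far": [[10.0, 10.0]]}
donor, synthetic = augment(rare, classes)
print(donor, synthetic)
```

    The synthetic samples then supplement the limited class during model training, mitigating the imbalance.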

    Synthesis and anticancer activity of novel 3,6-disubstituted 1,2,4-triazolo-[3,4-b]-1,3,4-thiadiazole derivatives

    The development of new antitumor agents is one of the most pressing research areas in medicinal chemistry and medicine. The importance of triazole and thiadiazole rings as scaffolds present in a wide range of therapeutic agents has been well reported and has driven the synthesis of a large number of novel antitumor agents. The presence of these heterocycles furnishes extensive synthetic possibilities due to the presence of several reaction sites. Prompted by these data, we designed, synthesized, and evaluated a series of novel 3,6-disubstituted 1,2,4-triazolo-[3,4-b]-1,3,4-thiadiazole derivatives as potential anticancer agents. We emphasized the strategy of combining two chemically different but pharmacologically compatible molecules (the 1,2,4-triazole and the 1,3,4-thiadiazole) in one frame. Several of the newly synthesized 1,2,4-triazolo-[3,4-b]-1,3,4-thiadiazole derivatives showed substantial cytostatic and cytotoxic antineoplastic activity in vitro, while they produced relatively low acute toxicities in vivo, giving potentially high therapeutic ratios. In silico screening revealed several protein targets, including apoptotic protease-activating factor 1 (APAF1) and tyrosine-protein kinase HCK, which may be involved in the biological activities of the active analogues.

    Classifying Flies Based on Reconstructed Audio Signals

    Advancements in sensor technology and processing power have made it possible to create recording equipment that can reconstruct the audio signal of insects passing through a directed infrared beam. The widespread deployment of such devices would allow for a range of applications previously not practical. A sensor net of detectors could be used to help model population dynamics, assess the efficiency of interventions and serve as an early warning system. At the core of any such system is a classification problem: given a segment of audio collected as something passes through a sensor, can we classify it? We examine the case of detecting the presence of fly species, with a particular focus on mosquitoes. This gives rise to a range of problems such as: can we discriminate between species of fly? Can we detect different species of mosquito? Can we detect the sex of the insect? Automated classification would significantly improve the effectiveness and efficiency of vector monitoring using these sensor nets. We assess a range of time series classification (TSC) algorithms on data from two projects working in this area. We assess our prior belief that spectral features are most effective, and we remark on all approaches with respect to whether they can be considered "real-time".