    Generalized Fractional Operators on Time Scales with Application to Dynamic Equations

    We introduce more general concepts of the Riemann-Liouville fractional integral and derivative on time scales, defined for a function with respect to another function. Sufficient conditions for the existence and uniqueness of a solution to an initial value problem described by generalized fractional-order differential equations on time scales are proved.
    Comment: This is a preprint whose final and definitive form is with 'The European Physical Journal Special Topics' (EPJ ST), ISSN 1951-6355 (Print), ISSN 1951-6401 (Online), available at https://link.springer.com/journal/11734. Submitted 20-June-2017; revised 26-Oct-2017; accepted for publication 06-April-2018.
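    For orientation, here is a minimal sketch of the classical (continuous-time) operator that the paper generalizes: the Riemann-Liouville fractional integral of order α of a function f with respect to an increasing function g, as commonly defined in the standard literature. This is the textbook definition, not the paper's time-scale version.

```latex
% Classical Riemann--Liouville fractional integral of f with respect to an
% increasing function g (continuous case, order \alpha > 0):
\[
  \bigl(I_{a^+}^{\alpha;\,g} f\bigr)(t)
  = \frac{1}{\Gamma(\alpha)}
    \int_a^t g'(s)\,\bigl(g(t)-g(s)\bigr)^{\alpha-1} f(s)\,\mathrm{d}s,
  \qquad t > a.
\]
```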

    Early identification of root rot disease by using hyperspectral reflectance: the case of pathosystem grapevine/Armillaria

    The genus Armillaria represents one of the most common causes of chronic root rot disease in woody plants. Prompt recognition of diseased plants is crucial to control the pathogen. However, current disease detection methods are limited at the field scale, so an alternative approach is needed. In this study, we investigated the potential of hyperspectral techniques to distinguish fungus-infected from healthy plants of Vitis vinifera. We used the hyperspectral imaging sensor Specim-IQ to acquire leaf reflectance data of the Teroldego Rotaliano grapevine cultivar. We analyzed three groups of plants: healthy, asymptomatic, and diseased. Highly significant differences were found in the near-infrared (NIR) spectral region, with a decreasing pattern from healthy to diseased plants attributable to changes in the leaf mesophyll. Asymptomatic plants stood out from the other groups through a lower reflectance in the red-edge spectrum (around 705 nm), ascribable to an accumulation of secondary metabolites involved in plant defense strategies. Further significant differences were observed at wavelengths close to 550 nm in diseased vs. asymptomatic plants. We evaluated several machine learning paradigms to differentiate the plant groups. The Naïve Bayes (NB) algorithm, combined with the most discriminant variables among vegetation indices and narrow spectral bands, provided the best results, with overall accuracies of 90% and 75% for healthy vs. diseased and healthy vs. asymptomatic plants, respectively. To our knowledge, this study represents the first report on the possibility of using hyperspectral data for root rot disease diagnosis in woody plants. Although further validation studies are required, the spectral reflectance technique, possibly implemented on unmanned aerial vehicles (UAVs), could be a promising tool for cost-effective, non-invasive in-field diagnosis and mapping of Armillaria disease, contributing a significant step forward in precision viticulture.
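    As a concrete illustration of the classification step, here is a minimal sketch of a Naïve Bayes workflow on tabular spectral features, assuming the discriminant variables (vegetation indices and narrow-band reflectances) have already been extracted into a feature matrix; the data below are random placeholders, and scikit-learn's GaussianNB stands in for the paper's exact NB configuration.

```python
# Minimal sketch: Naive Bayes classification of plant status from selected
# hyperspectral variables. X stands for an n_samples x n_features matrix of
# already-extracted features (placeholder random data here), y for labels
# such as 0 = healthy, 1 = diseased.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((60, 8))                  # placeholder spectral features
y = rng.integers(0, 2, size=60)          # placeholder health labels

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```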

    Time-Fractional Optimal Control of Initial Value Problems on Time Scales

    We investigate Optimal Control Problems (OCP) for fractional systems involving fractional-time derivatives on time scales. The fractional-time derivatives and integrals are considered, on time scales, in the Riemann-Liouville sense. Sufficient conditions for the existence and uniqueness of solutions to initial value problems described by fractional-order differential equations on time scales are known via the Banach fixed point theorem. Here we consider a fractional OCP with a performance index given as a delta-integral function of both state and control variables, with time evolving on an arbitrarily given time scale. Interpreting the Euler-Lagrange first-order optimality condition with an adjoint problem, defined by means of right Riemann-Liouville fractional delta derivatives, we obtain an optimality system for the considered fractional OCP. To that end, we first prove new fractional integration by parts formulas on time scales.
    Comment: This is a preprint of a paper accepted for publication as a book chapter with Springer International Publishing AG. Submitted 23-Jan-2019; revised 27-March-2019; accepted 12-April-2019.
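    As a reference point for the integration by parts formulas mentioned above, the classical continuous-case identity for Riemann-Liouville operators reads as follows; this is a sketch of the standard result, valid under suitable assumptions on f and g, and not the paper's delta-derivative version on time scales.

```latex
% Classical fractional integration by parts (continuous case), relating the
% left and right Riemann--Liouville derivatives of order \alpha:
\[
  \int_a^b f(t)\,\bigl({}_{a}D_t^{\alpha} g\bigr)(t)\,\mathrm{d}t
  = \int_a^b g(t)\,\bigl({}_{t}D_b^{\alpha} f\bigr)(t)\,\mathrm{d}t.
\]
```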

    Deep Unsupervised Embedding for Remote Sensing Image Retrieval Using Textual Cues

    Compared to image-image retrieval, text-image retrieval has been less investigated in the remote sensing community, possibly because of the complexity of appropriately tying textual data to the corresponding visual representations. Moreover, a single image may be described by multiple sentences, depending on the perception of the human labeler and the structure of the language they use, which magnifies the complexity even further. In this paper, we propose an unsupervised method for text-image retrieval in remote sensing imagery. In the method, image representations are obtained via visual Big Transfer (BiT) models, while textual descriptions are encoded via a bidirectional Long Short-Term Memory (Bi-LSTM) network. Training of the proposed retrieval architecture is optimized with an unsupervised embedding loss, which aims to make the features of an image close to those of its corresponding textual description and different from the features of other images, and vice versa. To demonstrate the performance of the proposed architecture, experiments are performed on two datasets, obtaining plausible text/image retrieval outcomes.
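    To make the push-pull behavior of such an embedding loss concrete, here is a minimal sketch, assuming image features (e.g., from BiT) and text features (e.g., from the Bi-LSTM) arrive in matched batches; the margin-based hinge formulation below is illustrative, not the paper's exact loss.

```python
# Illustrative batch-wise embedding loss: pull each image toward its own
# caption and push it away from the other captions in the batch, and vice
# versa (margin-based hinge; not the paper's exact formulation).
import torch
import torch.nn.functional as F

def embedding_loss(img, txt, margin=0.2):
    img = F.normalize(img, dim=1)        # (B, D) image features, e.g. BiT
    txt = F.normalize(txt, dim=1)        # (B, D) text features, e.g. Bi-LSTM
    sim = img @ txt.t()                  # (B, B) cosine similarity matrix
    pos = sim.diag().unsqueeze(1)        # similarities of matching pairs
    cost_t = (margin + sim - pos).clamp(min=0)      # image vs. wrong captions
    cost_i = (margin + sim - pos.t()).clamp(min=0)  # caption vs. wrong images
    off = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return cost_t[off].mean() + cost_i[off].mean()

loss = embedding_loss(torch.randn(4, 128), torch.randn(4, 128))
```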

    Learning a multi-branch neural network from multiple sources for knowledge adaptation in remote sensing imagery

    In this paper, we propose a multi-branch neural network, called MB-Net, for solving the problem of knowledge adaptation across multiple remote sensing scene datasets acquired with different sensors over diverse locations and manually labeled by different experts. Our aim is to learn invariant feature representations from multiple source domains with labeled images and one target domain with unlabeled images. To this end, we define for MB-Net an objective function that mitigates the multiple domain shifts at both the feature-representation and decision levels, while retaining the ability to discriminate between different land-cover classes. The complete architecture is trainable end-to-end via backpropagation. In the experiments, we demonstrate the effectiveness of the proposed method on a new multiple-domain dataset created from four heterogeneous scene datasets well known to the remote sensing community, namely the University of California, Merced (UC-Merced) dataset, the Aerial Image Dataset (AID), the PatternNet dataset, and the Northwestern Polytechnical University (NWPU) dataset. In particular, the method boosts the average accuracy over all transfer scenarios to 89.05%, compared to an average accuracy of 78.53% for a standard architecture trained only with the cross-entropy loss.
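    A minimal sketch of the multi-branch objective idea follows, assuming one labeled batch per source branch and an unlabeled target batch; the toy moment-matching alignment term below is a stand-in for MB-Net's actual feature- and decision-level alignment losses, which are not reproduced here.

```python
# Illustrative multi-branch objective for multi-source adaptation: one
# cross-entropy term per labeled source branch, plus a toy moment-matching
# penalty aligning pooled source features with unlabeled target features.
import torch
import torch.nn.functional as F

def mb_objective(branch_logits, branch_labels, source_feats, target_feats, lam=0.1):
    cls = sum(F.cross_entropy(lg, lb) for lg, lb in zip(branch_logits, branch_labels))
    src = torch.cat(source_feats)                       # pooled source features
    align = (src.mean(0) - target_feats.mean(0)).pow(2).sum()
    return cls + lam * align

logits = [torch.randn(8, 5) for _ in range(3)]          # three source branches
labels = [torch.randint(0, 5, (8,)) for _ in range(3)]
feats  = [torch.randn(8, 64) for _ in range(3)]
loss = mb_objective(logits, labels, feats, torch.randn(16, 64))
```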

    Contrasting Dual Transformer Architectures for Multi-Modal Remote Sensing Image Retrieval

    Remote sensing technology has advanced rapidly in recent years. Because of the deployment of quantitative and qualitative sensors, as well as the evolution of powerful hardware and software platforms, it powers a wide range of civilian and military applications. This in turn leads to the availability of large data volumes suitable for a broad range of applications such as monitoring climate change, yet processing, retrieving, and mining such large data remain challenging. Usually, content-based remote sensing (RS) image retrieval approaches rely on a query image to retrieve relevant images from the dataset. To increase the flexibility of the retrieval experience, cross-modal representations based on text–image pairs are gaining popularity. Indeed, combining the text and image domains is regarded as one of the next frontiers in RS image retrieval. Yet, aligning text to the content of RS images is particularly challenging because of the visual-semantic discrepancy between the language and vision worlds. In this work, we propose different architectures based on vision and language transformers for text-to-image and image-to-text retrieval. Extensive experimental results on four datasets, namely the TextRS, Merced, Sydney, and RSICD datasets, are reported and discussed.
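    To illustrate how a trained dual-encoder setup is used at query time, here is a minimal retrieval sketch, assuming text and image embeddings have already been produced by the respective transformers; the shapes, placeholder data, and cosine-similarity ranking are illustrative, not the paper's exact pipeline.

```python
# Illustrative query-time retrieval for a dual-encoder (vision transformer +
# language transformer) setup: rank pre-computed image embeddings against a
# text-query embedding by cosine similarity.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rank_images(text_emb, image_embs, k=5):
    q = F.normalize(text_emb, dim=-1)    # (D,) embedded text query
    g = F.normalize(image_embs, dim=-1)  # (N, D) embedded image gallery
    return (g @ q).topk(k).indices       # indices of the k best matches

top5 = rank_images(torch.randn(128), torch.randn(1000, 128))
```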

    TextRS: Deep Bidirectional Triplet Network for Matching Text to Remote Sensing Images

    Exploring the relevance between images and their natural language descriptions is, owing to its paramount importance, regarded as the next frontier in the general computer vision literature. Several recent works have therefore attempted to map visual attributes onto their corresponding textual tenor with some success. However, this line of research has not yet become widespread in the remote sensing community. On this point, our contribution is three-pronged. First, we construct a new dataset for text-image matching tasks, termed TextRS, by collecting images from four well-known scene datasets, namely the AID, Merced, PatternNet, and NWPU datasets. Each image is annotated with five different sentences, written by five different people to ensure diversity. Second, we put forth a novel Deep Bidirectional Triplet Network (DBTN) for matching text to images. Unlike traditional remote sensing image-to-image retrieval, our paradigm carries out retrieval by matching text to image representations. To achieve that, we learn a bidirectional triplet network composed of a Long Short-Term Memory (LSTM) network and pre-trained Convolutional Neural Networks (CNNs) based on EfficientNet-B2, ResNet-50, Inception-v3, and VGG16. Third, we top the proposed architecture with an average fusion strategy that fuses the features pertaining to an image's five sentences, which enables learning of more robust embeddings. The performance of the method, expressed in terms of Recall@K (the presence of the relevant image among the top K images retrieved for a query text), is promising: it yields 17.20%, 51.39%, and 73.02% for K = 1, 5, and 10, respectively.
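    A minimal sketch of the two ingredients named above, average fusion of the five sentence embeddings and a bidirectional (text-to-image and image-to-text) triplet loss, is given below; the formulation is generic, with cosine distance chosen for illustration, and is not the exact DBTN objective.

```python
# Illustrative sketch: average fusion of an image's five sentence embeddings,
# and a bidirectional triplet loss over (anchor, positive, negative) triples.
import torch
import torch.nn.functional as F

def fuse_sentences(sent_embs):           # (5, D) -> (D,) fused text embedding
    return F.normalize(sent_embs.mean(0), dim=-1)

def bidir_triplet(img, txt_pos, img_neg, txt_neg, margin=0.2):
    d = lambda a, b: 1.0 - F.cosine_similarity(a, b, dim=-1)
    t2i = F.relu(margin + d(txt_pos, img) - d(txt_pos, img_neg))  # text anchor
    i2t = F.relu(margin + d(img, txt_pos) - d(img, txt_neg))      # image anchor
    return (t2i + i2t).mean()

txt = fuse_sentences(torch.randn(5, 128))
loss = bidir_triplet(torch.randn(128), txt, torch.randn(128), torch.randn(128))
```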

    Recovering the Sight to Blind People in Indoor Environments with Smart Technologies

    Assistive technologies for blind people are growing fast, providing useful tools to support daily activities and improve social inclusion. Most of these technologies focus on helping blind people navigate and avoid obstacles; others emphasize assistance in recognizing surrounding objects. Very few, however, couple both aspects (i.e., navigation and recognition). To address these needs, we describe in this paper an innovative prototype that offers the capabilities to (i) move autonomously and (ii) recognize multiple objects in public indoor environments. It incorporates lightweight hardware components (camera, IMU, and laser sensors), all mounted on a reasonably sized integrated device worn on the chest. It requires the indoor environment to be ‘blind-friendly’, i.e., prior information about it should be prepared and loaded into the system beforehand. Its algorithms are mainly based on advanced computer vision and machine learning approaches. Interaction between the user and the system is performed through speech recognition and synthesis modules. The prototype offers the user the possibility to (i) walk across the site to reach a desired destination, avoiding static and mobile obstacles, and (ii) ask the system, through vocal interaction, to list the prominent objects in the user's field of view. We illustrate the performance of the proposed prototype through experiments conducted in a blind-friendly indoor space set up at our Department premises.
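    For intuition, here is a highly simplified sketch of what such a vocal interaction loop could look like; every module name and command string below is a hypothetical placeholder, not taken from the paper.

```python
# Highly simplified sketch of a possible vocal interaction loop; 'speech',
# 'navigator', 'recognizer', and 'tts' are hypothetical module interfaces,
# not the paper's actual components.
def interaction_loop(speech, navigator, recognizer, tts):
    while True:
        command = speech.listen()                    # speech recognition
        if command.startswith("go to "):
            destination = command.removeprefix("go to ").strip()
            for step in navigator.guide_to(destination):
                tts.say(step)                        # spoken, obstacle-aware guidance
        elif command == "what is around":
            objects = recognizer.detect()            # objects in field of view
            tts.say("I can see: " + ", ".join(objects))
        elif command == "stop":
            break
```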