54 research outputs found

    Evaluation of wavelength groups for discrimination of agricultural cover types

    Get PDF
    Multispectral scanner data in twelve spectral channels, covering the wavelength range 0.46 to 11.7 μm and acquired in July 1971 over three flightlines, were analyzed with automatic pattern recognition techniques. The twelve spectral channels were divided into four wavelength groups (W1, W2, W3 and W4) of three channels each and evaluated with respect to their estimated probability of correct classification (P_c) in discriminating agricultural cover types. The same analysis was repeated for data acquired in August to investigate the effect of acquisition time on the results. The effect of deleting each wavelength group on P_c is given for subsets of one to nine channels, and values of P_c for all possible combinations of wavelength groups are given for subsets of one to eleven channels.
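    The kind of channel-subset evaluation described above can be illustrated with a modern analogue: exhaustively scoring every channel subset of a given size with a cross-validated classifier, where accuracy stands in for the estimated P_c. The sketch below is under assumed inputs (a per-pixel channel matrix X and cover-type labels y) and uses scikit-learn; it is not the original 1971 analysis:

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def rank_channel_subsets(X, y, subset_size):
    """Score every subset of `subset_size` spectral channels by
    cross-validated accuracy (a proxy for the estimated P_c)."""
    scores = {}
    for subset in combinations(range(X.shape[1]), subset_size):
        acc = cross_val_score(LinearDiscriminantAnalysis(),
                              X[:, subset], y, cv=5).mean()
        scores[subset] = acc
    # best-scoring channel subsets first
    return sorted(scores.items(), key=lambda kv: -kv[1])
```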

    Spheroid detection in 2D images using circular Hough transform

    Get PDF
    The three-dimensional endothelial cell sprouting assay (3D-ECSA) exhibits differentiation of endothelial cells into sprouting structures inside a 3D matrix of collagen I. It is a screening tool for studying endothelial cell behavior and identifying angiogenesis inhibitors. The shape and size of an EC spheroid (an aggregation of ~750 cells) are important with respect to its growth performance in the presence of angiogenic stimulators: tubules formed on malformed spheroids lack homogeneity in terms of density and length. Well-formed spheroids must therefore be segregated from malformed ones to obtain better performance metrics. We aim to develop and validate an automated image-analysis software tool, as part of a high-content high-throughput screening (HC-HTS) assay platform, to exploit the 3D-ECSA as a differential HTS assay. We present a solution using the circular Hough transform to detect a nearly perfect spheroid, by virtue of its circular shape, in a 2D image. This successfully enables us to differentiate and separate good spheroids from malformed ones on an automated test bench.
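    A minimal sketch of this style of detection, using OpenCV's implementation of the circular Hough transform; the radius bounds and Hough parameters below are assumptions and would need tuning to the actual assay images:

```python
import cv2
import numpy as np

def detect_spheroids(path, min_r=40, max_r=120):
    """Detect near-circular spheroids in a 2D grayscale image with the
    circular Hough transform; returns an array of (x, y, radius)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)                   # suppress noise before Hough
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=2 * min_r,  # spheroids should not overlap
                               param1=100,         # Canny high threshold
                               param2=40,          # accumulator vote threshold
                               minRadius=min_r, maxRadius=max_r)
    return [] if circles is None else np.round(circles[0]).astype(int)
```

    Candidates with weak accumulator support (malformed, non-circular spheroids) fall below the vote threshold, which is what separates well-formed spheroids from the rest.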

    Hybrid solutions to instantaneous MIMO blind separation and decoding: narrowband, QAM and square cases

    Get PDF
    Future wireless communication systems are expected to support high data rates and high-quality transmission for growing multimedia applications. The push for higher channel throughput has led to multiple-input multiple-output (MIMO) systems and blind equalization techniques in recent years, and blind MIMO equalization has therefore attracted great interest. Both system performance and computational complexity play important roles in real-time communications, and reducing the computational load while maintaining accurate performance is a central challenge in present systems. In this thesis, a hybrid method that provides affordable complexity with good performance for blind equalization in large-constellation MIMO systems is proposed first. Computational cost is saved both in the signal separation part and in the signal detection part. First, based on the characteristics of quadrature amplitude modulation (QAM) signals, an efficient and simple nonlinear function for Independent Component Analysis (ICA) is introduced. Second, using the idea of sphere decoding, we restrict the soft information of channels to a sphere, thereby overcoming the so-called curse of dimensionality of the Expectation Maximization (EM) algorithm while improving the final results. Mathematically, we demonstrate that in digital communication cases the EM algorithm shows Newton-like convergence. Despite the widespread use of forward error coding (FEC), most MIMO blind channel estimation techniques ignore its presence and instead make the simplifying assumption that the transmitted symbols are uncoded. However, FEC induces code structure in the transmitted sequence that can be exploited to improve blind MIMO channel estimates. In the final part of this work, we exploit iterative channel estimation and decoding for blind MIMO equalization. Experiments show the improvements achievable by exploiting the existence of coding structures, and that the method can approach the performance of a BCJR equalizer with perfect channel information in a reasonable SNR range. All results are confirmed experimentally for the example of blind equalization in block-fading MIMO systems.
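    As a toy illustration of the blind separation stage, the sketch below recovers two independent symbol streams from an unknown instantaneous mixture with standard FastICA. It uses scikit-learn's default nonlinearity and real-valued ±1 symbols; the thesis's QAM-specific nonlinear function and the sphere-decoding EM stage are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=(1000, 2))  # two independent BPSK-like streams
A = rng.standard_normal((2, 2))              # unknown 2x2 instantaneous MIMO channel
x = s @ A.T                                  # observed mixtures

ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)                 # sources recovered up to scale/permutation
```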

    Mapping Ecological Focus Areas within the EU CAP Controls Framework by Copernicus Sentinel-2 Data

    Get PDF
    Greening is a Common Agricultural Policy (CAP) subsidy that ensures that all EU farmers receiving income support produce climate and environmental benefits as part of their farming activities. To receive greening support, the farmer must carry out three agricultural practices that are considered environmentally and climate friendly: (a) crop diversification; (b) maintenance of permanent meadows and pastures; and (c) presence of an Ecological Focus Area (EFA). Contributions are delivered and monitored by paying agencies (PP) that ordinarily perform administrative checks and spot checks. The latter are carried out through photo-interpretation of high-resolution satellite or aerial images and, in specific cases, through local ground checks (GC) as well. In this work, prompted by the Piemonte Regional Agency for Payments in Agriculture (ARPEA), a prototype service to support PPs' controls within the greening CAP framework is proposed, with special concern for EFA detection. The proposed approach is expected to be a valid alternative or supporting tool for GC. It relies on the analysis of NDVI time series derived from Copernicus Sentinel-2 data. The study was conducted in the provinces of Turin, Asti and Vercelli within the Piedmont Region (NW Italy), and over 12,500 EFA fields were assessed. Since the recent National Report No. 5465 stipulates that mowing and any other soil management operation is prohibited on set-aside land designated as an EFA during the reference period (RP) between 1st March and 30th June, a time series (TS) of NDVI over the same period was generated. Once averaged at plot level, NDVI trends were modelled by a first-order polynomial and the corresponding statistics (namely R², MAE and maximum residual) were computed. These were used as discriminants for EFA detection in a thresholding approach (Otsu's method) calibrated on the training dataset. Threshold satisfaction was then tested and, depending on the number of satisfied thresholds out of the possible three, EFA and non-EFA plots were detected with different degrees of reliability. The corresponding EFA map was generated for the area of interest and validated against GCs provided by the ARPEA. The results showed an overall accuracy of 84%, indicating that the approach is promising. The authors maintain that this procedure represents a valid alternative (or complementary) tool for ground controls by PPs.
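    A compact sketch of the per-plot statistics described above: a first-order polynomial is fitted to the plot-averaged NDVI time series over the reference period, and R², MAE and the maximum residual are returned as the three discriminants. Function and variable names are illustrative, and the Otsu calibration line assumes a training set of such statistics:

```python
import numpy as np
from skimage.filters import threshold_otsu

def plot_discriminants(days, ndvi):
    """Fit NDVI(t) with a first-order polynomial over the reference
    period and return (R^2, MAE, maximum residual) for one plot."""
    fitted = np.polyval(np.polyfit(days, ndvi, deg=1), days)
    resid = ndvi - fitted
    r2 = 1 - np.sum(resid**2) / np.sum((ndvi - ndvi.mean())**2)
    return r2, np.abs(resid).mean(), np.abs(resid).max()

# Calibrate one threshold per statistic on training plots via Otsu's method, e.g.:
# r2_thr = threshold_otsu(np.array([plot_discriminants(d, n)[0]
#                                   for d, n in training_plots]))
```

    A plot is then labelled EFA or non-EFA, with graded reliability, according to how many of the three thresholds it satisfies.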

    Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget

    Full text link
    Masked Image Modeling (MIM) methods like Masked Autoencoders (MAE) efficiently learn a rich representation of the input. However, to adapt to downstream tasks they require a sufficient amount of labeled data, since their rich features encode not only objects but also less relevant image background. In contrast, Instance Discrimination (ID) methods focus on objects. In this work, we study how to combine the efficiency and scalability of MIM with the ability of ID to perform downstream classification in the absence of large amounts of labeled data. To this end, we introduce Masked Autoencoder Contrastive Tuning (MAE-CT), a sequential approach that utilizes the implicit clustering of the Nearest Neighbor Contrastive Learning (NNCLR) objective to induce abstraction in the topmost layers of a pre-trained MAE. MAE-CT tunes the rich features such that they form semantic clusters of objects without using any labels. Notably, MAE-CT does not rely on hand-crafted augmentations and frequently achieves its best performance using only minimal augmentations (crop & flip). Further, MAE-CT is compute-efficient, requiring at most 10% overhead compared to MAE re-training. Applied to large and huge Vision Transformer (ViT) models, MAE-CT surpasses previous self-supervised methods trained on ImageNet in linear probing, k-NN and low-shot classification accuracy, as well as in unsupervised clustering accuracy. With ViT-H/16, MAE-CT achieves a new state of the art in linear probing of 82.2%.
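    The NNCLR objective at the core of the contrastive-tuning stage can be sketched as follows: each embedding of one view is swapped for its nearest neighbor in a support set before the InfoNCE loss is computed against the second view. This is a minimal PyTorch sketch, not the authors' released implementation; maintenance of the support set (a FIFO queue of past embeddings) is omitted:

```python
import torch
import torch.nn.functional as F

def nnclr_loss(z1, z2, support, temperature=0.1):
    """NNCLR-style loss: replace each embedding in z1 by its nearest
    neighbor in `support`, then contrast against the second view z2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    support = F.normalize(support, dim=1)
    nn_idx = (z1 @ support.t()).argmax(dim=1)  # cosine-nearest support entries
    nn1 = support[nn_idx]
    logits = nn1 @ z2.t() / temperature        # positives lie on the diagonal
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

    The nearest-neighbor swap is what induces the implicit clustering: embeddings close in the support set are pulled toward the same targets.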

    Sparse Modeling for Image and Vision Processing

    Get PDF
    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
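    The core idea of sparse coding, representing a signal as a linear combination of a few dictionary atoms, can be shown in a few lines with orthogonal matching pursuit; the synthetic dictionary and sparsity level below are arbitrary choices for the demonstration:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))     # dictionary: 256 atoms of dimension 64
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms

x_true = np.zeros(256)                 # ground-truth sparse code, 5 active atoms
x_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
y = D @ x_true                         # observed signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
omp.fit(D, y)                          # recover a 5-sparse code for y
print(np.flatnonzero(omp.coef_))       # indices of the recovered atoms
```

    In the dictionary-learning setting the monograph focuses on, D itself is also optimized so that such sparse codes reconstruct the training data well.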

    Electrical Impedance Tomography/Spectroscopy (EITS): a Code Division Multiplexed (CDM) approach

    Get PDF
    Electrical Impedance Tomography and Spectroscopy (EITS) is a noninvasive imaging technique that creates images of cross-sections ("tomos") of objects by discriminating them based on their electrical impedance. This thesis investigated, and successfully confirmed, the use of Code Division Multiplexing (CDM) with Gold codes in EITS. The results showed errors of 3.5% and 6.2% in determining the position and size of imaged anomalies, respectively, with an attainable imaging speed of 462 frames/second. These results compare favourably with those reported for Time Division Multiplexing (TDM) and Frequency Division Multiplexing (FDM). The new approach provides a more robust mode of EITS for fast-changing dynamic systems by eliminating temporal data inconsistencies. Furthermore, it enables robust use of frequency-difference imaging and spectroscopy in EITS by eliminating frequency data inconsistencies. In this method of imaging, electric current patterns are safely injected into the imaged object by a set of electrodes arranged in a single plane on the object's surface for 2-dimensional (2D) imaging; for 3-dimensional (3D) imaging, more electrode planes are used. The injected currents result in measurable voltages on the object's surface. These voltages are measured and, together with the input currents and a Finite Element Model (FEM) of the object, used to reconstruct an impedance image of the cross-sectional contents of the imaged object. The reconstruction process involves the numerical solution of the forward problem using finite element solvers, and of the resulting ill-posed inverse problem using iterative optimization or computational intelligence methods. The method has applications mainly in biomedical imaging and process monitoring. The primary interests of the author are in imaging and diagnosis of cancer, neonatal pneumonia and neurological disorders, which are leading causes of death in Africa and worldwide.
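    For reference, Gold codes of length 2^n - 1 are generated by XOR-ing a preferred pair of maximal-length LFSR sequences at all relative shifts. Below is a minimal sketch for length-31 codes; the tap sets correspond to the commonly cited preferred pair x^5 + x^2 + 1 and x^5 + x^4 + x^3 + x^2 + 1, an assumption rather than the code family used in the thesis:

```python
import numpy as np

def m_sequence(taps, nbits, seed=1):
    """Fibonacci LFSR m-sequence of length 2**nbits - 1. `taps` are the
    0-indexed state positions XOR-ed to form the feedback bit."""
    state = [(seed >> i) & 1 for i in range(nbits)]
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])              # output the oldest bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]          # shift the feedback bit in
    return np.array(out, dtype=np.uint8)

a = m_sequence(taps=[2, 4], nbits=5)       # x^5 + x^2 + 1
b = m_sequence(taps=[0, 1, 2, 4], nbits=5) # x^5 + x^4 + x^3 + x^2 + 1
gold = [a ^ np.roll(b, k) for k in range(31)]  # 31 Gold codes (plus a and b)
```

    The low, bounded cross-correlation of the family is what lets several current patterns be injected simultaneously and separated at the receiver.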

    HERA Phase I Limits on the Cosmic 21 cm Signal: Constraints on Astrophysics and Cosmology during the Epoch of Reionization

    Get PDF
    Recently, the Hydrogen Epoch of Reionization Array (HERA) has produced the experiment's first upper limits on the power spectrum of 21 cm fluctuations at z ∼ 8 and 10. Here, we use several independent theoretical models to infer constraints on the intergalactic medium (IGM) and galaxies during the epoch of reionization from these limits. We find that the IGM must have been heated above the adiabatic-cooling threshold by z ∼ 8, independent of uncertainties about IGM ionization and the radio background. Combining HERA limits with complementary observations constrains the spin temperature of the z ∼ 8 neutral IGM to 27 K < T_S < 630 K (2.3 K < T_S < 640 K) at 68% (95%) confidence. They therefore also place a lower bound on X-ray heating, a previously unconstrained aspect of early galaxies. For example, if the cosmic microwave background dominates the z ∼ 8 radio background, the new HERA limits imply that the first galaxies produced X-rays more efficiently than local ones. The z ∼ 10 limits require even earlier heating if dark-matter interactions cool the hydrogen gas. If an extra radio background is produced by galaxies, we rule out (at 95% confidence) the combination of high radio and low X-ray luminosities of L_r,ν/SFR > 4 × 10^24 W Hz^-1 M_⊙^-1 yr and L_X/SFR < 7.6 × 10^39 erg s^-1 M_⊙^-1 yr. The new HERA upper limits neither support nor disfavor a cosmological interpretation of the recent Experiment to Detect the Global EoR Signature (EDGES) measurement. The framework described here provides a foundation for the interpretation of future HERA results.

    Clustering-based k-nearest neighbor classification for large-scale data with neural codes representation

    Get PDF
    While standing as one of the most widely used and successful supervised classification algorithms, the k-nearest neighbor (kNN) classifier is generally inefficient because it is an instance-based method. Approximated Similarity Search (ASS) is a possible alternative that improves efficiency, typically at the expense of classifier performance. In this paper we take as our starting point an ASS strategy based on clustering. We then improve its performance by addressing issues with instances located close to cluster boundaries, enlarging the clusters, and by using Deep Neural Networks to learn a representation suited to the classification task at hand. Results on a collection of eight different datasets show that the combined use of these two strategies yields a significant improvement in accuracy, with a considerable reduction in the number of distances needed to classify a sample compared to the basic kNN rule. This work has been supported by the Spanish Ministerio de Economía y Competitividad through Project TIMuL (No. TIN2013-48152-C2-1-R, supported by EU FEDER funds), the Spanish Ministerio de Educación, Cultura y Deporte through an FPU Fellowship (Ref. AP2012-0939), and by the Universidad de Alicante through the FPU programme (UAFPU2014-5883) and the Instituto Universitario de Investigación Informática (IUII).
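    A minimal sketch of the clustering-based ASS idea, without the boundary-enlargement step or the neural-code representation and with illustrative names throughout: partition the training set with k-means, then run exact kNN only inside the cluster whose centroid is nearest to the query:

```python
import numpy as np
from sklearn.cluster import KMeans

class ClusteredKNN:
    """Approximate kNN: k-means partitioning plus exact search
    restricted to the query's nearest cluster."""
    def __init__(self, n_clusters=32, k=5):
        self.k = k
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)

    def fit(self, X, y):
        self.X, self.y = X, y
        self.labels = self.km.fit_predict(X)
        return self

    def predict(self, q):
        c = self.km.predict(q[None, :])[0]           # nearest centroid
        idx = np.flatnonzero(self.labels == c)       # candidate instances
        d = np.linalg.norm(self.X[idx] - q, axis=1)  # exact distances in cluster
        nn = idx[np.argsort(d)[:self.k]]
        vals, counts = np.unique(self.y[nn], return_counts=True)
        return vals[np.argmax(counts)]               # majority vote
```

    Distances are computed only against one cluster's members rather than the whole training set, which is where the reported reduction in distance computations comes from; queries near cluster boundaries are exactly the failure case the paper's enlargement strategy targets.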