
    Efficient Privacy Preserving Viola-Jones Type Object Detection via Random Base Image Representation

    A cloud server has spent considerable time, energy, and money to train a Viola-Jones type object detector with high accuracy. Clients can upload their photos to the cloud server to detect objects, but a client does not want the content of his/her photos to be leaked. Meanwhile, the cloud server is also reluctant to leak any parameters of the trained object detector. Ten years ago, Avidan & Butman introduced Blind Vision, a method for securely evaluating a Viola-Jones type object detector. Blind Vision uses standard cryptographic tools and is painfully slow to compute, taking a couple of hours to scan a single image. The purpose of this work is to explore an efficient method that speeds up the process. We propose the Random Base Image (RBI) representation: the original image is decomposed into random base images, and only these base images are submitted, in random order, to the cloud server, so the content of the image cannot be leaked. Meanwhile, a random vector and the secure Millionaire protocol are leveraged to protect the parameters of the trained object detector. The RBI representation re-enables the integral-image technique, yielding a large acceleration. The experimental results reveal that our method retains the detection accuracy of the plain vision algorithm and is significantly faster than the traditional blind vision approach, with only a very low theoretical probability of information leakage.
    Comment: 6 pages, 3 figures; to appear in the proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Jul 10-14, 2017, Hong Kong
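
    The core trick can be sketched in a few lines. Below is a minimal illustration (hypothetical code, not the authors' implementation) of splitting an image into random base images that individually look like noise, while integral images, being linear, can still be computed per base and summed on the server side:

        import numpy as np

        def random_base_images(img, n=4, rng=None):
            """Split img into n random base images that sum back to img.

            Each base looks like noise on its own, so the bases can be
            uploaded in random order without revealing the content."""
            rng = np.random.default_rng(rng)
            img = img.astype(np.int64)
            bases = [rng.integers(-255, 256, size=img.shape) for _ in range(n - 1)]
            bases.append(img - np.sum(bases, axis=0))  # residual restores the sum
            return bases

        def integral_image(img):
            """Summed-area table: cumulative sums along both axes."""
            return img.cumsum(axis=0).cumsum(axis=1)

        # Integral images are linear, so Haar-like feature responses can be
        # evaluated per base and added:
        img = np.arange(16).reshape(4, 4)
        bases = random_base_images(img, n=3, rng=0)
        assert np.array_equal(sum(integral_image(b) for b in bases),
                              integral_image(img))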

    An equilibrium-conserving taxation scheme for income from capital

    Under conditions of market equilibrium, the distribution of capital income follows a Pareto power law, with an exponent that characterizes the given equilibrium. Here, a simple taxation scheme is proposed such that the post-tax capital income distribution remains an equilibrium distribution, albeit with a different exponent. This taxation scheme is shown to be progressive, and its parameters can be derived simply from (i) the total amount of tax to be levied, (ii) the threshold above which capital income will be taxed, and (iii) the total amount of capital income. The latter can be obtained either by using Piketty's estimates of the capital/labor income ratio or by fitting the initial Pareto exponent. Moreover, both ways provide a check on the amount of declared income from capital.
    Comment: 4 pages, 2 figures
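
    To make this concrete, one construction consistent with the abstract (an illustrative reading, not necessarily the paper's exact formula) maps pre-tax incomes above a threshold m, distributed as Pareto with exponent alpha, to post-tax incomes that are Pareto with a new exponent alpha'; the resulting tax is progressive:

        import numpy as np

        def post_tax(x, m, alpha, alpha_new):
            """Map Pareto(alpha) incomes above threshold m to Pareto(alpha_new).

            If P(X > x) = (m/x)**alpha for x > m, then
            y = m * (x/m)**(alpha/alpha_new) satisfies P(Y > y) = (m/y)**alpha_new,
            so the post-tax distribution is again Pareto. The tax x - y takes a
            growing share of income when alpha_new > alpha, i.e. it is progressive.
            """
            return m * (x / m) ** (alpha / alpha_new)

        rng = np.random.default_rng(0)
        m, alpha, alpha_new = 1.0, 1.5, 2.0
        # numpy's pareto() samples the Lomax form; +1 and scaling give Pareto I
        x = m * (rng.pareto(alpha, size=100_000) + 1)
        y = post_tax(x, m, alpha, alpha_new)
        rate = 1 - y / x
        print(f"total tax levied: {np.sum(x - y):.0f}")
        print(f"mean rate, bottom decile: {rate[x < np.quantile(x, 0.1)].mean():.3f}")
        print(f"mean rate, top decile:    {rate[x > np.quantile(x, 0.9)].mean():.3f}")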

    AICropCAM: Deploying classification, segmentation, detection, and counting deep-learning models for crop monitoring on the edge

    Precision Agriculture (PA) promises to meet future demands for food, feed, fiber, and fuel while keeping their production sustainable and environmentally friendly. PA relies heavily on sensing technologies to inform site-specific decision support for planting, irrigation, fertilization, spraying, and harvesting. Traditional point-based sensors enjoy small data sizes but are limited in their capacity to measure plant and canopy parameters. Imaging sensors, on the other hand, can be powerful in measuring a wide range of these parameters, especially when coupled with Artificial Intelligence. The challenge, however, is the lack of computing, electric power, and connectivity infrastructure in agricultural fields, preventing the full utilization of imaging sensors. This paper reports AICropCAM, a field-deployable imaging framework that integrates edge image processing, the Internet of Things (IoT), and LoRaWAN for low-power, long-range communication. The core component of AICropCAM is a stack of four Deep Convolutional Neural Network (DCNN) models running sequentially: CropClassiNet for crop type classification, CanopySegNet for canopy cover quantification, PlantCountNet for plant and weed counting, and InsectNet for insect identification. These DCNN models were trained and tested offline with more than 43,000 field crop images. AICropCAM was implemented on a distributed wireless sensor network whose sensor node consists of an RGB camera for image acquisition, a Raspberry Pi 4B single-board computer for edge image processing, and an Arduino MKR1310 for LoRa communication and power management. Our testing showed that the time to run the DCNN models ranged from 0.20 s for InsectNet to 20.20 s for CanopySegNet, and power consumption ranged from 3.68 W for InsectNet to 5.83 W for CanopySegNet. The classification model CropClassiNet reported 94.5% accuracy, and the segmentation model CanopySegNet reported 92.83% accuracy. The two object detection models, PlantCountNet and InsectNet, reported mean average precision of 0.69 and 0.02, respectively, on the test images. Predictions from the DCNN models were transmitted to the ThingSpeak IoT platform for visualization and analytics. We conclude that AICropCAM successfully implements image processing on the edge, drastically reduces the amount of data being transmitted, and can satisfy the real-time needs of decision-making in PA. AICropCAM can be deployed on moving platforms such as center pivots or drones to increase its spatial coverage and resolution to support crop monitoring and field operations.
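
    The sequential-stack design can be sketched as follows (the stub models and payload format are hypothetical; on the real node each stage would be a compiled DCNN): only compact predictions, never raw images, are packed into a payload small enough for LoRaWAN:

        import json
        import time

        # Hypothetical stand-ins for the four models named in the paper; on
        # the real node each would be a compiled DCNN (e.g. a TFLite
        # interpreter) running on the Raspberry Pi 4B.
        def crop_classi_net(img):  return {"crop": "maize", "conf": 0.95}
        def canopy_seg_net(img):   return {"canopy_pct": 41.2}
        def plant_count_net(img):  return {"plants": 37, "weeds": 4}
        def insect_net(img):       return {"insects": 1}

        def process_frame(img):
            """Run the four models sequentially and keep only compact
            predictions: raw images never leave the node, so the result
            fits LoRaWAN's small payload budget."""
            result = {"ts": int(time.time())}
            for model in (crop_classi_net, canopy_seg_net,
                          plant_count_net, insect_net):
                result.update(model(img))
            return json.dumps(result).encode("utf-8")

        payload = process_frame(img=None)  # camera capture omitted in sketch
        assert len(payload) <= 222         # max LoRaWAN payload at best data rate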

    An sTGC Prototype Readout System for ATLAS New-Small-Wheel Upgrade

    This paper presents a readout system designed for testing a prototype of the small-strip Thin Gap Chamber (sTGC), one of the main detector technologies used in the ATLAS New Small Wheel upgrade. The readout system is aimed at testing one full-size sTGC quadruplet with cosmic-muon triggers.

    Bagging Improves the Performance of Deep Learning-Based Semantic Segmentation with Limited Labeled Images: A Case Study of Crop Segmentation for High-Throughput Plant Phenotyping

    Advancements in imaging, computer vision, and automation have revolutionized various fields, including field-based high-throughput plant phenotyping (FHTPP). This integration allows for the rapid and accurate measurement of plant traits. Deep Convolutional Neural Networks (DCNNs) have emerged as a powerful tool in FHTPP, particularly in crop segmentation (identifying crops from the background), which is crucial for trait analysis. However, the effectiveness of DCNNs often hinges on the availability of large labeled datasets, which poses a challenge due to the high cost of labeling. In this study, a deep learning approach with bagging is introduced to enhance crop segmentation using high-resolution RGB images, tested on the NU-Spidercam dataset from maize plots. The proposed method outperforms traditional machine learning and deep learning models in prediction accuracy and speed. Remarkably, it achieves up to 40% higher Intersection-over-Union (IoU) than the threshold method and 11% higher than conventional machine learning, with significantly faster prediction times and a manageable training duration. Crucially, it demonstrates that even small labeled datasets can yield high accuracy in semantic segmentation. This approach not only proves effective for FHTPP but also suggests potential for broader application in remote sensing, offering a scalable solution to semantic segmentation challenges. This paper is accompanied by publicly available source code.
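
    The bagging idea itself is model-agnostic and can be sketched as below (the train_fn/predict_proba interface is hypothetical, not from the paper): train k segmentation models on bootstrap replicates of the small labeled set and soft-vote their per-pixel probabilities:

        import numpy as np

        def bagged_segmenter(train_fn, images, masks, k=5, seed=0):
            """Train k segmentation models on bootstrap replicates of a small
            labeled set and soft-vote their per-pixel probabilities.

            train_fn(imgs, msks) -> model with model.predict_proba(img) is a
            hypothetical interface standing in for any DCNN backbone.
            """
            rng = np.random.default_rng(seed)
            models = []
            for _ in range(k):
                idx = rng.integers(0, len(images), size=len(images))  # bootstrap
                models.append(train_fn([images[i] for i in idx],
                                       [masks[i] for i in idx]))

            def predict(img, threshold=0.5):
                probs = np.mean([m.predict_proba(img) for m in models], axis=0)
                return probs > threshold  # boolean crop/background mask

            return predict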

    Predicting Aesthetic Score Distribution through Cumulative Jensen-Shannon Divergence

    Aesthetic quality prediction is a challenging task in the computer vision community because of the complex interplay with semantic content and photographic technology. Recent studies on powerful deep learning based aesthetic quality assessment usually use a binary high-low label or a numerical score to represent aesthetic quality. However, such scalar representations cannot adequately describe the underlying variety of human aesthetic perception. In this work, we propose to predict the aesthetic score distribution (i.e., a score distribution vector of the ordinal basic human ratings) using a Deep Convolutional Neural Network (DCNN). Conventional DCNNs, which aim to minimize the difference between predicted scalar numbers or vectors and the ground truth, cannot be directly applied to the ordinal basic rating distribution. Thus, a novel CNN based on the Cumulative distribution with Jensen-Shannon divergence (CJS-CNN) is presented to predict the aesthetic score distribution of human ratings, together with a new reliability-sensitive learning method based on the kurtosis of the score distribution, which eliminates the requirement for the original full data of human ratings (without normalization). Experimental results on a large-scale aesthetic dataset demonstrate the effectiveness of the introduced CJS-CNN for this task.
    Comment: AAAI Conference on Artificial Intelligence (AAAI), New Orleans, Louisiana, USA, 2-7 Feb. 2018
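
    For intuition, below is a sketch of a symmetric Jensen-Shannon divergence computed on cumulative distributions of ordinal rating histograms; this is one standard formulation and may differ in detail from the paper's exact CJS loss:

        import numpy as np

        def cumulative_js(p, q, eps=1e-12):
            """Symmetric JS-style divergence on cumulative distributions.

            Comparing CDFs rather than raw histograms respects the ordering
            of the rating bins, which is the stated motivation for CJS. The
            log-sum inequality keeps every bin's contribution non-negative.
            """
            p = np.asarray(p, dtype=float) / np.sum(p)
            q = np.asarray(q, dtype=float) / np.sum(q)
            P, Q = np.cumsum(p), np.cumsum(q)
            M = 0.5 * (P + Q)
            kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
            return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

        # Two score histograms over 10 ordinal rating bins (AVA-style counts):
        human = [0, 1, 3, 8, 20, 30, 22, 10, 5, 1]
        model = [0, 2, 5, 10, 22, 28, 20, 9, 3, 1]
        print(cumulative_js(human, model))  # small, non-negative value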

    Temporal dynamics of maize plant growth, water use, and leaf water content using automated high throughput RGB and hyperspectral imaging

    Automated collection of large-scale plant phenotype datasets using high-throughput imaging systems has the potential to alleviate current bottlenecks in data-driven plant breeding and crop improvement. In this study, we characterize the temporal dynamics of plant growth, water use, and leaf water content of two maize genotypes under two different water treatments. RGB (Red Green Blue) images are processed to estimate projected plant area, which is correlated with destructively measured plant shoot fresh weight (FW), dry weight (DW), and leaf area. Estimated plant FW and DW, along with pot weights, are used to derive daily plant water consumption and water use efficiency (WUE) of the individual plants. Hyperspectral images of the plants are processed to extract leaf reflectance and correlate it with leaf water content (LWC). Strong correlations are found between projected plant area and all three destructively measured plant parameters (R² > 0.95) at early growth stages. The correlations become weaker at later growth stages due to the large difference in plant structure between the two maize genotypes. Daily water consumption (or evapotranspiration) is largely determined by water treatment, whereas WUE (or biomass accumulation per unit of water used) is clearly determined by genotype, indicating strong genetic control of WUE. LWC is successfully predicted from the hyperspectral images for both genotypes (R² = 0.81 and 0.92). Hyperspectral imaging can be a very powerful tool for phenotyping biochemical traits of whole maize plants, complementing RGB imaging for plant morphological trait analysis.
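
    As a concrete illustration of the RGB side of this pipeline, the sketch below (a common Excess Green baseline with made-up calibration data, not necessarily the authors' exact method) estimates projected plant area and relates it to fresh weight by least squares:

        import numpy as np

        def projected_plant_area(rgb, exg_thresh=0.10):
            """Projected plant area (pixel count) via the Excess Green index.

            ExG = 2g - r - b on chromatic coordinates; pixels above the
            threshold are counted as plant. A common baseline, not
            necessarily the authors' exact segmentation pipeline.
            """
            rgb = rgb.astype(float)
            s = rgb.sum(axis=2) + 1e-9           # avoid division by zero
            r, g, b = (rgb[..., i] / s for i in range(3))
            return int((2 * g - r - b > exg_thresh).sum())

        # Relate projected area to destructively measured fresh weight with a
        # least-squares fit (areas in px and weights in g are made-up numbers):
        areas = np.array([12e3, 25e3, 40e3, 61e3, 80e3])
        fw = np.array([15.0, 31.0, 52.0, 78.0, 101.0])
        slope, intercept = np.polyfit(areas, fw, 1)
        r2 = np.corrcoef(areas, fw)[0, 1] ** 2
        print(f"FW ~ {slope:.2e} * area + {intercept:.1f}  (R^2 = {r2:.3f})")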