
    A Medium Survey of the Hard X-Ray Sky with ASCA. II.: The Source's Broad Band X-Ray Spectral Properties

    A complete sample of 60 serendipitous hard X-ray sources with fluxes in the range ~1×10⁻¹³ to ~4×10⁻¹² erg cm⁻² s⁻¹ (2–10 keV), detected in 87 ASCA GIS2 images, was recently presented in the literature. Using this sample it was possible to extend the description of the 2–10 keV LogN(>S)–LogS down to a flux limit of ~6×10⁻¹⁴ erg cm⁻² s⁻¹ (the faintest detectable flux), resolving about a quarter of the Cosmic X-ray Background (CXB). In this paper we combine the ASCA GIS2 and GIS3 data of these sources to investigate their X-ray spectral properties using hardness ratios and the "stacked" spectra method. Because the sample is statistically representative, the results presented here, which refer to the faintest hard X-ray sources that can be studied with current instrumentation, are relevant to the understanding of the CXB and of the AGN unification scheme. Comment: 28 pages plus 6 figures, LaTeX manuscript, accepted for publication in the Astrophysical Journal. Figure 5 can be retrieved via anonymous ftp at ftp://ftp.brera.mi.astro.it/pub/ASCA/paper2/fig5.ps.g
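The hardness-ratio technique mentioned in the abstract compares count rates in a soft and a hard energy band. A common convention is HR = (H − S)/(H + S); the paper's exact band boundaries and normalization are not given here, so this is a minimal illustrative sketch, not the authors' pipeline:

```python
def hardness_ratio(soft_counts: float, hard_counts: float) -> float:
    """Return HR = (H - S) / (H + S).

    HR runs from -1 (all counts in the soft band) to +1 (all counts in
    the hard band); harder spectra, e.g. heavily absorbed AGN, give
    larger HR values.
    """
    total = soft_counts + hard_counts
    if total <= 0:
        raise ValueError("need a positive total count rate")
    return (hard_counts - soft_counts) / total

# Example: a source with 30 soft-band and 70 hard-band counts
print(hardness_ratio(30, 70))  # 0.4
```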

    A new approach to image-based estimation of food volume

    A balanced diet is the key to a healthy lifestyle and is crucial for preventing or dealing with many chronic diseases such as diabetes and obesity. Therefore, monitoring diet can be an effective way of improving people's health. However, manual reporting of food intake has been shown to be inaccurate and often impractical. This paper presents a new approach to food intake quantity estimation using image-based modeling. The modeling method consists of three steps: firstly, a short video of the food is taken by the user's smartphone, and from this video six frames are selected based on the pictures' viewpoints as determined by the smartphone's orientation sensors. Secondly, the user marks one of the frames to seed an interactive segmentation algorithm; segmentation is based on a Gaussian Mixture Model alongside the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point cloud to model the food, while a stochastic object-detection method locates a checkerboard used as a size/ground reference. The modeling algorithm is optimized such that the use of six input images still results in an acceptable computation cost. In our evaluation procedure, we achieved an average accuracy of 92% on a test set that includes images of different kinds of pasta and bread, with an average processing time of about 23 s.
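The checkerboard size reference in the pipeline above reduces to simple arithmetic: once the side of one checkerboard square is measured in pixels, the mm-per-pixel scale fixes the metric size of the reconstructed point cloud, and volumes scale with the cube of that factor. A minimal sketch (the 25 mm square size and this volume conversion are assumptions for illustration, not the paper's exact procedure):

```python
def metric_scale(square_px: float, square_mm: float = 25.0) -> float:
    """mm per pixel, from a checkerboard square of known physical size."""
    return square_mm / square_px

def volume_mm3(volume_px3: float, square_px: float,
               square_mm: float = 25.0) -> float:
    """Convert a point-cloud volume from pixel^3 to mm^3.

    Lengths scale linearly with mm/px, so volumes scale with its cube.
    """
    s = metric_scale(square_px, square_mm)
    return volume_px3 * s ** 3

# A 25 mm square seen as 50 px wide -> 0.5 mm/px; 8000 px^3 -> 1000 mm^3
print(volume_mm3(8000, 50))  # 1000.0
```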

    Automatic diet monitoring: a review of computer vision and wearable sensor-based methods

    Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial base for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical, so several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing its strengths and weaknesses. In particular, the article reviews two approaches to this problem, which account for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second one relies on wearable sensors and has the detection of eating behaviours as its main goal.

    Food image recognition using very deep convolutional networks

    We evaluated the effectiveness in classifying food images of a deep-learning approach based on the specifications of Google's image recognition architecture Inception. The architecture is a deep convolutional neural network (DCNN) having a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% as top-1 accuracy and 96.88%, 97.27%, and 92.58% as top-5 accuracy. To the best of our knowledge, these results significantly improve the best published results obtained on the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors'. Because of this, even if it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements for mobile systems.
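The top-1 and top-5 figures quoted above measure how often the true class falls among the k highest-scoring predictions. A generic helper for computing them (an illustration, not the authors' evaluation code):

```python
def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k
    highest-scoring classes.

    scores: per-sample lists of per-class scores
    labels: true class indices, one per sample
    """
    hits = 0
    for sample_scores, label in zip(scores, labels):
        # indices of the k largest scores
        top_k = sorted(range(len(sample_scores)),
                       key=lambda i: sample_scores[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]  # second sample's true class has the lowest score
print(top_k_accuracy(scores, labels, k=1))  # 0.5
print(top_k_accuracy(scores, labels, k=3))  # 1.0
```

Top-5 accuracy is always at least top-1 accuracy, which is why the 96–97% top-5 figures sit above the corresponding top-1 numbers.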