1,794 research outputs found

    Cooperative Object Segmentation and Behavior Inference in Image Sequences

    In this paper, we propose a general framework for fusing bottom-up segmentation with top-down object behavior inference over an image sequence. This approach is beneficial for both tasks, since it enables them to cooperate so that knowledge relevant to each can aid in the resolution of the other, thus enhancing the final result. In particular, the behavior inference process offers dynamic probabilistic priors to guide segmentation. At the same time, segmentation supplies its results to the inference process, ensuring that they are consistent both with prior knowledge and with new image information. The prior models are learned from training data and they adapt dynamically, based on newly analyzed images. We demonstrate the effectiveness of our framework via particular implementations that we have employed in the resolution of two hand gesture recognition applications. Our experimental results illustrate the robustness of our joint approach to segmentation and behavior inference in challenging conditions involving complex backgrounds and occlusions of the target object.

    A collaborative approach to image segmentation and behavior recognition from image sequences

    Visual behavior recognition is currently a highly active research area, owing both to the scientific challenge posed by the complexity of the task and to the growing interest in its applications, such as automated visual surveillance, human-computer interaction, medical diagnosis, or video indexing/retrieval. A large number of different approaches have been developed, whose complexity and underlying models depend on the goals of the targeted application. The general trend followed by these approaches is the separation of the behavior recognition task into two sequential processes. The first is a feature extraction process, in which features considered relevant for the recognition task are extracted from the input image sequence. The second is the actual recognition process, in which the extracted features are classified in terms of pre-defined behavior classes. One problematic aspect of such a two-pass procedure is that the recognition process depends heavily on the feature extraction process and cannot influence it; consequently, a failure of the feature extraction process may impair correct recognition. The focus of our thesis is on the recognition of single-object behavior from monocular image sequences. We propose a general framework where feature extraction and behavior recognition are performed jointly, thereby allowing the two tasks to mutually improve their results through collaboration and sharing of existing knowledge. The intended collaboration is achieved by introducing a probabilistic temporal model based on a Hidden Markov Model (HMM). In our formulation, behavior is decomposed into a sequence of simple actions, and each action is associated with a different probability of observing a particular set of object attributes within the image at a given time. Moreover, our model includes a probabilistic formulation of attribute (feature) extraction in terms of image segmentation.
    Contrary to existing approaches, segmentation is achieved by taking into account the relative probabilities of each action, which are provided by the underlying HMM. In this context, we solve the joint problem of attribute extraction and behavior recognition by developing a variation of the Viterbi decoding algorithm, adapted to our model. Within the algorithm derivation, we translate the probabilistic attribute extraction formulation into a variational segmentation model. The proposed model is defined as a combination of typical image- and contour-dependent energy terms with a term which encapsulates prior information offered by the collaborating recognition process. This prior information is introduced by means of a competition between multiple prior terms, corresponding to the different action classes which may have generated the current image. As a result of our algorithm, the recognized behavior is represented as a succession of action classes corresponding to the images in the given sequence. Furthermore, we develop an extension of our general framework that allows us to deal with a common situation encountered in applications: the case where behavior is specified in terms of a discrete set of behavior types, made up of different successions of actions drawn from a shared set of action classes. The recognition of behavior then requires estimating the most probable behavior type and the corresponding most probable succession of action classes explaining the observed image sequence. To this end, we modify our initial model and develop a corresponding Viterbi decoding algorithm. Both our initial framework and its extension are defined in general terms, involving several free parameters which can be chosen so as to obtain suitable implementations for the targeted applications. In this thesis, we demonstrate the viability of the proposed framework by developing particular implementations for two applications.
    Both applications belong to the field of gesture recognition and concern finger-counting and finger-spelling. For the finger-counting application, we use our original framework, whereas for the finger-spelling application, we use its proposed extension. For both applications, we instantiate the free parameters of the respective frameworks with particular models and quantities. We then explain the training of the obtained models from specific training data. Finally, we present the results obtained by testing our trained models on new image sequences. The test results show the robustness of our models in difficult cases, including noisy images, occlusions of the gesturing hand, and cluttered backgrounds. For the finger-spelling application, a comparison with the traditional sequential approach to image segmentation and behavior recognition illustrates the superiority of our collaborative model.
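    The decoding step described above, a variation of Viterbi decoding over an HMM whose hidden states are action classes, can be sketched in its standard form. This is a minimal illustration only, assuming the per-frame attribute log-likelihoods are already available; the thesis's actual algorithm couples this step with a variational segmentation model, which is not reproduced here.

    ```python
    import numpy as np

    def viterbi(log_pi, log_A, log_B):
        """Standard Viterbi decoding sketch.
        log_pi: (K,) initial state log-probabilities
        log_A:  (K, K) transition log-probabilities, A[i, j] = P(j | i)
        log_B:  (T, K) per-frame log-likelihoods of the observed attributes
        Returns the most probable state (action-class) sequence."""
        T, K = log_B.shape
        delta = np.zeros((T, K))           # best log-score ending in each state
        psi = np.zeros((T, K), dtype=int)  # backpointers
        delta[0] = log_pi + log_B[0]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A   # (K, K): prev state -> cur state
            psi[t] = np.argmax(scores, axis=0)
            delta[t] = scores[psi[t], np.arange(K)] + log_B[t]
        # Backtrack from the best final state.
        path = [int(np.argmax(delta[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]
    ```

    In the thesis's joint formulation, the emission term `log_B[t]` would itself come from segmenting frame t under each action class's prior, rather than from a fixed feature extractor.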

    Overexpression of the Tomato Pollen Receptor Kinase LePRK1 Rewires Pollen Tube Growth to a Blebbing Mode

    The tubular growth of a pollen tube cell is crucial for the sexual reproduction of flowering plants. LePRK1 is a pollen-specific and plasma membrane–localized receptor-like kinase from tomato (Solanum lycopersicum). LePRK1 interacts with another receptor, LePRK2, and with KINASE PARTNER PROTEIN (KPP), a Rop guanine nucleotide exchange factor. Here, we show that pollen tubes overexpressing LePRK1 or a truncated LePRK1 lacking its extracellular domain (LePRK1ΔECD) have enlarged tips but also extend their leading edges by producing “blebs.” Coexpression of LePRK1 and tomato PLIM2a, an actin bundling protein that interacts with KPP in a Ca2+-responsive manner, suppressed these LePRK1 overexpression phenotypes, whereas pollen tubes coexpressing KPP, LePRK1, and PLIM2a resumed the blebbing growth mode. We conclude that overexpression of LePRK1 or LePRK1ΔECD rewires pollen tube growth to a blebbing mode, through KPP- and PLIM2a-mediated bundling of actin filaments from tip plasma membranes. Arabidopsis thaliana pollen tubes expressing LePRK1ΔECD also grew by blebbing. Our results exposed a hidden capability of the pollen tube cell: upon overexpression of a single membrane-localized molecule, LePRK1 or LePRK1ΔECD, it can switch to an alternative mechanism for extension of the leading edge that is analogous to the blebbing growth mode reported for Dictyostelium and for Drosophila melanogaster stem cells.
    Fil: Gui, Cai Ping. Chinese Academy of Sciences; República de China
    Fil: Dong, Xin. Chinese Academy of Sciences; República de China
    Fil: Liu, Hai Kuan. Chinese Academy of Sciences; República de China
    Fil: Huang, Wei Jie. Chinese Academy of Sciences; República de China
    Fil: Zhang, Dong. Chinese Academy of Sciences; República de China
    Fil: Wang, Shu Jie. Chinese Academy of Sciences; República de China
    Fil: Barberini, María Laura. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Investigaciones en Ingeniería Genética y Biología Molecular "Dr. Héctor N. Torres"; Argentina
    Fil: Gao, Xiao Yan. Chinese Academy of Sciences; República de China
    Fil: Muschietti, Jorge Prometeo. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Investigaciones en Ingeniería Genética y Biología Molecular "Dr. Héctor N. Torres"; Argentina. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Biodiversidad y Biología Experimental; Argentina
    Fil: McCormick, Sheila. University of California at Berkeley; Estados Unidos
    Fil: Tang, Wei Hua. Chinese Academy of Sciences; República de China. University of California at Berkeley; Estados Unidos

    Finger-spelling Recognition within a Collaborative Segmentation/Behavior Inference Framework

    We introduce a new approach for finger-spelling recognition from video sequences, relying on the collaboration between the feature extraction and behavior inference processes. The inference process dynamically guides the segmentation-based feature extraction process towards the most likely location of the signer's hand (based on its attributes). Reciprocally, segmentation supplies the inference process with hand attributes extracted from each image, combining the received guidance with new image information. This collaboration is beneficial for both processes, yielding not only accurate segmentations of the spelling hand, but also a robust recognition scheme which can cope with the complex backgrounds typical of real-life situations.

    Business plan for the cocoa oat silky pudding “Coasilk” with a production capacity of 300 cups per day

    Cocoa oat silky pudding under the brand “Coasilk” is a silky pudding product based on oat ‘milk’ with added chocolate paste. “Coasilk” aims to offer a vegan dessert option to vegan consumers and to consumers with a lactose allergy. The “Coasilk” business is planned with a production capacity of 300 cups per day, to be established at Jalan Rungkut Mejoyo Selatan 1 No. 18, Surabaya. “Coasilk” is produced at household scale with three workers, so it is classified as a Micro, Small, and Medium Enterprise (UMKM). The raw materials for “Coasilk” are water, rolled oats, cocoa powder, granulated sugar, kappa carrageenan, rum flavoring, and basil seeds. Processing consists of weighing, soaking the rolled oats, filtering, mixing, heating, packaging, and cooling. The primary packaging is a 240 mL PET cup. The company's selling price for the “Coasilk” cocoa oat silky pudding is Rp 9,500 per cup, while the distributor's selling price is at most Rp 12,000, with a net volume of 150 mL. Distribution is carried out directly, with consumers coming to the “Coasilk” production site or via courier service, and marketing is done through social media. The “Coasilk” business has an after-tax rate of return (ROR) of 171.06%, against a Minimum Attractive Rate of Return (MARR) of 14.37%. The after-tax payback period is 6.88 months, and the Break Even Point (BEP) obtained is 58.26%.
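    The plan's headline figures (ROR, payback period, BEP) follow from standard business-plan formulas. A minimal sketch with hypothetical placeholder inputs, not the actual “Coasilk” accounts; the BEP here is expressed as a percentage of capacity, one common convention:

    ```python
    def financials(total_investment, annual_profit_after_tax,
                   fixed_costs, price_per_cup, variable_cost_per_cup, cups_per_year):
        """All inputs are hypothetical placeholders for illustration."""
        ror = annual_profit_after_tax / total_investment * 100        # % per year
        payback_months = total_investment / annual_profit_after_tax * 12
        bep_units = fixed_costs / (price_per_cup - variable_cost_per_cup)
        bep_pct = bep_units / cups_per_year * 100                     # % of annual capacity
        return ror, payback_months, bep_pct
    ```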

    A Probabilistic Temporal Model for Joint Attribute Extraction and Behavior Recognition

    The focus of this paper is on the recognition of single object behavior from monocular image sequences. The general literature trend is to perform behavior recognition separately, after an initial phase of feature/attribute extraction. We propose a framework where behavior recognition is performed jointly with attribute extraction, allowing the two tasks to mutually improve their results. To this end, we express the joint recognition/extraction problem in terms of a probabilistic temporal model, allowing its resolution via a variation of the Viterbi decoding algorithm, adapted to our model. Within the algorithm derivation, we translate probabilistic attribute extraction into a variational segmentation scheme. We demonstrate the viability of the proposed framework through a particular implementation for finger-spelling recognition. The obtained results illustrate the superiority of our collaborative model with respect to the traditional approach, where attribute extraction and behavior recognition are performed sequentially.

    The role of apparent diffusion coefficient (ADC) in the evaluation of lymph node status in patients with locally advanced cervical cancer: our experience and a review

    Purpose: To evaluate the role of apparent diffusion coefficient (ADC) value measurement in the diagnosis of metastatic lymph nodes (LNs) in patients with locally advanced cervical cancer (LACC) and to present a systematic review of the literature. Material and methods: Magnetic resonance imaging (MRI) exams of patients with LACC were retrospectively evaluated. Mean ADC, relative ADC (rADC), and corrected ADC (cADC) values of enlarged LNs were measured and compared between positron emission tomography (PET)-positive and PET-negative LNs. Comparisons were made using the Mann-Whitney U-test and Student's t-test. ROC curves were generated for each parameter to identify the optimal cut-off value for differentiation of the LNs. A systematic search of the literature was performed, exploring several databases, including PubMed, Scopus, the Cochrane Library, and Embase. Results: A total of 105 LNs in 34 patients were analysed. The median ADC value of PET-positive LNs (0.907 × 10⁻³ mm²/s [0.780-1.080]) was lower than that of PET-negative LNs (1.275 × 10⁻³ mm²/s [1.063-1.525]) (p < 0.05). rADC and cADC values were also lower in PET-positive LNs (rADC: 0.120 × 10⁻³ mm²/s [−0.060 to 0.270]; cADC: 1.130 [0.980-1.420]) than in PET-negative LNs (rADC: 0.435 × 10⁻³ mm²/s [0.225-0.673]; cADC: 1.615 [1.210-1.993]) (p < 0.05). ADC showed the highest area under the curve (AUC 0.808). Conclusions: Mean ADC, rADC, and cADC were significantly lower in the PET-positive group than in the PET-negative group. The ADC cut-off value of 1.149 × 10⁻³ mm²/s showed the highest sensitivity. These results confirm the usefulness of ADC in differentiating metastatic from non-metastatic LNs in LACC.
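    Cut-off selection in ROC analyses like this one is commonly done by maximizing Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds. A minimal sketch, using synthetic ADC values rather than the study's data, and assuming the study's convention that lower ADC indicates a metastatic (PET-positive) node:

    ```python
    import numpy as np

    def best_cutoff(positive_adc, negative_adc):
        """Pick the ADC threshold maximizing Youden's J.
        A node is called metastatic when its ADC <= threshold
        (metastatic nodes show LOWER diffusion).
        Returns (cutoff, sensitivity, specificity)."""
        candidates = np.sort(np.concatenate([positive_adc, negative_adc]))
        best_c, best_j = None, -1.0
        for c in candidates:
            sens = np.mean(positive_adc <= c)   # true-positive rate
            spec = np.mean(negative_adc > c)    # true-negative rate
            j = sens + spec - 1
            if j > best_j:
                best_c, best_j = c, j
        return best_c, float(np.mean(positive_adc <= best_c)), float(np.mean(negative_adc > best_c))
    ```

    On the study's real measurements, this kind of sweep is what yields a single reported threshold (here, 1.149 × 10⁻³ mm²/s) together with its sensitivity and specificity.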