79 research outputs found

    Free-hand sketch recognition by multi-kernel feature learning

    Free-hand sketch recognition has become increasingly popular due to the recent expansion of portable touchscreen devices. However, the problem is non-trivial due to the complexity of internal structures, which leads to intra-class variation, coupled with the sparsity of visual cues, which results in inter-class ambiguity. To address the structural complexity, we propose a novel structured representation that captures the holistic structure of a sketch. Moreover, to overcome the visual-cue sparsity problem and achieve state-of-the-art recognition performance, we propose a Multiple Kernel Learning (MKL) framework for sketch recognition that fuses several features common to sketches. We evaluate all the proposed techniques on the most diverse sketch dataset to date (Mathias et al., 2012) and offer detailed, systematic analyses of the performance of different features and representations, including a breakdown by sketch super-category. Finally, we investigate the use of attributes as a high-level feature for sketches, show how they complement low-level features to improve recognition performance under the MKL framework, and explore novel applications such as attribute-based retrieval.
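The kernel-fusion step at the heart of an MKL framework can be illustrated with a toy sketch. The code below is a deliberate simplification, not the paper's implementation: in real MKL the combination weights are learned jointly with the classifier, whereas here they are fixed, and the feature matrices and function names are invented for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fuse_kernels(kernels, weights):
    """Convex combination of base kernel matrices, as in MKL."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # enforce the simplex constraint
    return sum(w * K for w, K in zip(weights, kernels))

# Toy example: two feature types extracted from the same 4 sketches
rng = np.random.default_rng(0)
feat_a = rng.normal(size=(4, 8))   # e.g. a HOG-like descriptor
feat_b = rng.normal(size=(4, 5))   # e.g. a shape-context-like descriptor

K = fuse_kernels([rbf_kernel(feat_a, feat_a), rbf_kernel(feat_b, feat_b)],
                 weights=[0.6, 0.4])
# K is a single symmetric PSD kernel usable by any kernel classifier (e.g. an SVM)
```

Because each base kernel is symmetric with unit diagonal and the weights lie on the simplex, the fused kernel keeps both properties.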

    Abnormal event detection in crowded scenes using sparse representation

    We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing a prior weight for each basis during sparse reconstruction, the proposed SRC is more robust than other outlier-detection criteria. To condense the over-complete normal bases into a compact dictionary, a novel dictionary selection method with a group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low-rank structure, we reformulate the problem using matrix decomposition, which can handle large-scale training sets by reducing the memory requirement at each iteration from O(k²) to O(k), where k is the number of samples. We use columnwise coordinate descent to solve the matrix-decomposition formulation, which empirically leads to a solution similar to that of the group-sparsity formulation. By designing different types of spatio-temporal bases, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our method can be easily extended to online event detection. Experiments on three benchmark datasets and comparison to state-of-the-art methods validate the advantages of our method.
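The sparse reconstruction cost can be sketched in a few lines. This is an illustrative stand-in rather than the paper's implementation: it solves the weighted ℓ1 problem with a simple ISTA loop (function name, parameters, and toy data are all invented for the example) and scores a test sample by the resulting objective value; a high cost means the sample is hard to explain with the normal dictionary.

```python
import numpy as np

def sparse_reconstruction_cost(x, D, lam=0.1, weights=None, n_iter=200):
    """Score a sample x by min_a 0.5*||x - D a||^2 + lam * ||w * a||_1,
    solved approximately with ISTA. Higher cost => more abnormal."""
    n_atoms = D.shape[1]
    w = np.ones(n_atoms) if weights is None else np.asarray(weights, float)
    a = np.zeros(n_atoms)
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                   # gradient of the data term
        a = a - step * grad
        thr = lam * w * step
        a = np.sign(a) * np.maximum(np.abs(a) - thr, 0.0)   # soft threshold
    resid = x - D @ a
    return 0.5 * resid @ resid + lam * np.abs(w * a).sum()

# Toy dictionary of 5 unit-norm "normal" atoms in R^20
rng = np.random.default_rng(1)
D = rng.normal(size=(20, 5))
D /= np.linalg.norm(D, axis=0)
x_normal = D @ np.array([1.0, 0.5, 0.0, 0.0, 0.0])  # lies in the normal span
x_abnormal = rng.normal(size=20)                     # mostly outside the span
cost_n = sparse_reconstruction_cost(x_normal, D)
cost_a = sparse_reconstruction_cost(x_abnormal, D)
# cost_n is small (sample is explained by the dictionary); cost_a is much larger
```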

    Color-to-Grayscale: Does the Method Matter in Image Recognition?

    In image recognition it is often assumed that the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. These methods are tested using a modern descriptor-based image recognition framework on face, object, and texture datasets with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures.
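Two of the simplest color-to-grayscale conversions such a comparison would cover can be written in a few lines; the function names here are illustrative, and the luma weights are the standard ITU-R BT.601 coefficients.

```python
import numpy as np

def to_gray_luminance(img):
    """Weighted luma conversion (ITU-R BT.601 coefficients)."""
    return img @ np.array([0.299, 0.587, 0.114])

def to_gray_mean(img):
    """Naive 'intensity' method: plain average of the channels."""
    return img.mean(axis=-1)

rgb = np.array([[[1.0, 0.0, 0.0]]])     # a single pure-red pixel
lum = to_gray_luminance(rgb)            # -> 0.299 (red contributes little luma)
avg = to_gray_mean(rgb)                 # -> 0.333...
```

Even on this one pixel the two methods disagree noticeably, which is why the choice of conversion can matter for downstream descriptors.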

    Analysis of Raw-Material Inventory Control for Organic Fertilizer Using the EOQ Method

    Inventory is one of the more important working-capital items, since much of a firm's working capital is tied up in it. Well-executed inventory control makes the production process easier to run. Raw-material inventory exists to meet the needs of future production. Raw-material requirements are set according to the quantity of material needed, its availability from suppliers, its storage and maintenance, and customer demand. This study aims to identify the raw-material inventory system for organic fertilizer operated by PT. Gresik Cipta Sejahtera and to analyze the raw materials using the EOQ (Economic Order Quantity) method. The study takes a quantitative approach, with the goal of finding an economical inventory-control policy. The research was carried out at PT. Gresik Cipta Sejahtera; the site was chosen purposively because the company is a subsidiary of PT. Gresik. The plant manager was selected as the respondent because he met the study's criteria, including an understanding of fertilizer raw materials, ordering costs, and material usage. Data were collected through questionnaire-based interviews, observation, and documentation. The analysis covered raw-material purchasing, EOQ calculation, safety stock, lead time, and the reorder point. The results show that the company's current raw-material control method is just-in-time, which has several weaknesses: dependence on farmers, the need to reorder raw materials after production errors, damaged raw materials, and wasted raw materials.
Based on the EOQ analysis of chicken compost, cow compost, and sugarcane bagasse, optimal order quantities can be determined. For bagasse, the company's costs are most efficient when ordering 75.78 times per month in lots of 312.11 kg; the bagasse safety stock from January 2017 through December 2017 should be 81,644.79 kg, and bagasse should be reordered when warehouse stock falls to 223,471.79 kg. For chicken compost, costs are most efficient when ordering 184.64 times per month in lots of 760.48 kg; the safety stock PT. Gresik Cipta Sejahtera should hold over the same period is 275,732.78 kg, with a reorder point of 696,971.53 kg. For cow compost, costs are most efficient when ordering 182.66 times per month in lots of 655.84 kg; safety stock over the same period should be 218,266.19 kg, with a reorder point of 560,080.47 kg. Lead times for the three raw materials range from about one to three days.
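The quantities in an EOQ analysis follow the standard formulas Q* = sqrt(2DS/H) and ROP = d·L + safety stock. A minimal sketch, using made-up illustrative numbers rather than the study's data:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Economic Order Quantity: Q* = sqrt(2 * D * S / H),
    where D is demand per period, S the cost per order,
    and H the holding cost per unit per period."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def reorder_point(daily_usage, lead_time_days, safety_stock):
    """ROP = expected demand over the lead time plus safety stock."""
    return daily_usage * lead_time_days + safety_stock

# Hypothetical inputs: 12,000 kg annual demand, 50 per order, 2 per kg-year
q = eoq(demand=12000, order_cost=50, holding_cost=2)
# q = sqrt(600000) ~ 774.6 kg per order
rop = reorder_point(daily_usage=100, lead_time_days=3, safety_stock=200)
# rop = 500 kg: reorder when warehouse stock falls to this level
```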

    Detecting irregularities in images and in video

    We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term "irregular" depends on the context in which the "regular" or "valid" are defined. Yet, it is not realistic to expect an explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: we try to compose a newly observed image region or video segment ("the query") using chunks of data ("pieces of the puzzle") extracted from previous visual examples ("the database"). Regions of the observed data that can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions that cannot be composed from the database (or can be composed only from small fragmented pieces) are regarded as unlikely or suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, to detecting suspicious behavior, and to automatic visual inspection for quality assurance.
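The composition idea can be caricatured with a nearest-neighbor patch score: query patches that no database chunk explains well receive a high irregularity score. This toy sketch, with invented names and data, ignores the contiguity preference and the graphical-model inference of the actual method.

```python
import numpy as np

def irregularity_scores(query_patches, db_patches):
    """Squared distance from each query patch to its nearest database patch.
    A high score means the patch cannot be 'composed' from normal data."""
    # Pairwise squared distances, shape (n_query, n_db), via broadcasting
    d2 = ((query_patches[:, None, :] - db_patches[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1)

# Database of 'normal' flattened 3x3 patches; query contains one outlier
rng = np.random.default_rng(2)
db = rng.normal(scale=0.1, size=(50, 9))
query = rng.normal(scale=0.1, size=(4, 9))
query[2] += 5.0                         # inject an irregular patch
scores = irregularity_scores(query, db)
# scores peaks at index 2: the injected patch is far from every normal chunk
```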