    Lifting Wavelet Based Cognitive Vision System

    This paper presents a cognitive vision system based on the learning of lifting wavelets. The learning process consists of four steps: (1) extract training and query object images automatically from adjacent video frames using our proposed cosine-maximization method; (2) compute autocorrelation vectors from the extracted training images, and their discriminant vectors by linear discriminant analysis; (3) map the autocorrelation vectors onto the discriminant vector space to obtain feature vectors; (4) learn lifting parameters in the feature vectors using the idea of discriminant analysis. Recognition of a query object is performed by measuring the cosine distance between its feature vector and the feature vectors of the training object images. Our experimental results on vehicle type recognition show that the proposed system performs better than discriminant analysis of the original images.
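
    A minimal sketch of the recognition step described above, assuming the feature vectors have already been projected onto the discriminant space; the lifting-parameter learning and the cosine-maximization extraction are not reproduced, and the function names and stand-in random features are illustrative only.

        import numpy as np

        def cosine_distance(a, b):
            # 1 - cosine similarity between two feature vectors
            return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

        def recognize(query_feature, training_features, training_labels):
            # Compare the query feature vector against every training feature
            # vector in the discriminant space and return the closest label.
            distances = [cosine_distance(query_feature, f) for f in training_features]
            return training_labels[int(np.argmin(distances))]

        # Illustrative usage with random stand-in feature vectors
        rng = np.random.default_rng(0)
        train = rng.normal(size=(10, 32))        # 10 training feature vectors
        labels = [f"vehicle_type_{i % 3}" for i in range(10)]
        query = train[4] + 0.01 * rng.normal(size=32)
        print(recognize(query, train, labels))   # expected: label of vector 4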

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off efficiency against complexity, while achieving accurate rendering of smooth regions as well as faithful reproduction of contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. These typically exhibit redundancy to improve sparsity in the transformed domain, and sometimes invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding. Comment: 65 pages, 33 figures, 303 references
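
    As a concrete illustration of the sparse-approximation problem mentioned above (finding an efficient representation in a highly redundant dictionary), the following sketch runs plain matching pursuit over a random overcomplete dictionary; it is a generic toy example, not one of the specific geometric transforms surveyed in the paper, and all sizes and names are assumptions.

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms=5):
            # Greedy sparse approximation: at each step pick the unit-norm atom
            # most correlated with the residual and subtract its contribution.
            residual = signal.astype(float).copy()
            coeffs = np.zeros(dictionary.shape[1])
            for _ in range(n_atoms):
                correlations = dictionary.T @ residual
                k = int(np.argmax(np.abs(correlations)))
                coeffs[k] += correlations[k]
                residual -= correlations[k] * dictionary[:, k]
            return coeffs, residual

        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 256))          # redundant dictionary, 4x overcomplete
        D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
        x = 2.0 * D[:, 3] - 1.5 * D[:, 100]     # signal that is sparse in D
        c, r = matching_pursuit(x, D, n_atoms=5)
        print(np.nonzero(c)[0], np.linalg.norm(r))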

    Brain–computer interface and assist-as-needed model for upper limb robotic arm

    https://journals.sagepub.com/doi/10.1177/1687814019875537
    Post-stroke paralysis, whereby subjects lose voluntary control over muscle actuation, is one of the main causes of disability. Repetitive physical therapy can reinstate lost motions and strength through neuroplasticity. However, manually delivered therapies are becoming ineffective due to the scarcity of therapists, subjectivity in treatment, and lack of patient motivation. Robot-assisted physical therapy is therefore being researched to deliver evidence-based, systematic treatment. Recently, intelligent controllers and brain–computer interfaces have been proposed for rehabilitation robots to encourage patient participation, which is the key to quick recovery. In the present work, a brain–computer interface and an assist-as-needed training paradigm are proposed for an upper limb rehabilitation robot. The brain–computer interface system is implemented with an electroencephalography sensor; moreover, backdrivability in the actuator is achieved through the assist-as-needed control approach, which allows subjects to move the robot actively using their limited motions and strength. The robot assists only for the remaining portion of the trajectory that subjects are unable to perform themselves. The robot intervention point is obtained from the patient's intent, which is captured through the brain–computer interface. Problems encountered during the practical implementation of the brain–computer interface and the achievement of backdrivability in the actuator are discussed and resolved.
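
    A hedged sketch of the assist-as-needed idea described above: the robot stays passive while the subject tracks the reference within a tolerance, and only supplies torque for the residual error once movement intent has been detected through the brain–computer interface. The deadband, gain, and single-joint proportional form are illustrative assumptions, not the controller reported in the paper.

        def assist_as_needed_torque(q_ref, q_actual, intent_detected,
                                    tolerance=0.05, kp=20.0):
            """Return an assistive torque for one joint (illustrative units: Nm).

            q_ref, q_actual : reference and measured joint angle (rad)
            intent_detected : True when the BCI reports movement intent
            tolerance       : deadband within which the robot stays passive
            kp              : proportional assistance gain (illustrative value)
            """
            if not intent_detected:
                return 0.0                  # no intent, no assistance
            error = q_ref - q_actual
            if abs(error) <= tolerance:
                return 0.0                  # subject is tracking well enough on their own
            # Assist only for the part of the error outside the deadband
            residual = error - tolerance if error > 0 else error + tolerance
            return kp * residual

        # Illustrative call: subject lags 0.2 rad behind the reference
        print(assist_as_needed_torque(q_ref=1.0, q_actual=0.8, intent_detected=True))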

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Medical image enhancement

    Each image acquired from a medical imaging system is often part of a two-dimensional (2-D) image set that together presents a three-dimensional (3-D) object for diagnosis. Unfortunately, these images are sometimes of poor quality, and the resulting distortions cause an inadequate presentation of the object of interest, which can lead to inaccurate image analysis. Blurring is considered a particularly serious problem, so "deblurring" an image to obtain better quality is an important issue in medical image processing. In our research, the image is first decomposed. Contrast improvement is then achieved by modifying the coefficients obtained from the decomposition: small coefficient values, which represent subtle details, are amplified to improve the visibility of the corresponding details, while the stronger image-density variations, which dominate the overall dynamic range and have large coefficient values, can be reduced without much loss of information.
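
    A minimal sketch of the coefficient-modification step described above, using the PyWavelets package for a 2-D decomposition; the 'db2' wavelet, the decomposition level, and the power-law gain are illustrative assumptions rather than the scheme used in this work.

        import numpy as np
        import pywt

        def enhance(image, wavelet="db2", level=2, gamma=0.7):
            # Decompose the image, raise small detail coefficients relative to
            # large ones with a power-law gain (gamma < 1), then reconstruct.
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            approx, details = coeffs[0], coeffs[1:]
            new_details = []
            for (ch, cv, cd) in details:
                bands = []
                for band in (ch, cv, cd):
                    m = np.max(np.abs(band)) + 1e-12
                    # small magnitudes are boosted toward the band maximum,
                    # large magnitudes are left essentially unchanged
                    bands.append(np.sign(band) * m * (np.abs(band) / m) ** gamma)
                new_details.append(tuple(bands))
            return pywt.waverec2([approx] + new_details, wavelet)

        # Illustrative usage on a synthetic low-contrast image
        img = np.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128)) * 50 + 100
        print(enhance(img).shape)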

    3D Object Reconstruction from Imperfect Depth Data Using Extended YOLOv3 Network

    State-of-the-art intelligent versatile applications provoke the usage of full 3D, depth-based streams, especially in scenarios of intelligent remote control and communications, where virtual and augmented reality will soon become outdated and are forecast to be replaced by point cloud streams providing explorable 3D environments for communication and industrial data. One of the most novel approaches employed in modern object reconstruction methods is to use a priori knowledge of the objects being reconstructed. Our approach is different, as we strive to reconstruct a 3D object under much more difficult scenarios of limited data availability: the data stream is often limited by insufficient depth camera coverage and, as a result, objects are occluded and data is lost. Our proposed hybrid artificial neural network modifications have improved the reconstruction results by 8.53, which allows much more precise filling of occluded object sides and reduction of noise during the process. Furthermore, the addition of object segmentation masks and individual object instance classification is a leap forward towards general-purpose scene reconstruction, as opposed to a single-object reconstruction task, due to the ability to mask out overlapping object instances and use only the masked object area in the reconstruction process.
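
    A small sketch of the masking idea mentioned above: a per-instance segmentation mask is used to zero out everything except the selected object in the depth image before it is handed to a reconstruction network. The array layout and mask format are assumptions for illustration; the extended YOLOv3 network and the reconstruction pipeline themselves are not reproduced here.

        import numpy as np

        def mask_instance_depth(depth, instance_ids, instance_id):
            # Keep only the depth pixels that belong to the selected instance,
            # so overlapping objects do not leak into the reconstruction input.
            mask = instance_ids == instance_id        # boolean mask for one object
            masked = np.where(mask, depth, 0.0)       # zero out background and other objects
            return masked, mask

        # Illustrative usage: a 4x4 depth map with two object instances (ids 1 and 2)
        depth = np.array([[0.0, 1.2, 1.2, 0.0],
                          [0.0, 1.2, 2.5, 2.5],
                          [0.0, 0.0, 2.5, 2.5],
                          [0.0, 0.0, 0.0, 0.0]])
        ids = np.array([[0, 1, 1, 0],
                        [0, 1, 2, 2],
                        [0, 0, 2, 2],
                        [0, 0, 0, 0]])
        masked_depth, _ = mask_instance_depth(depth, ids, instance_id=2)
        print(masked_depth)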

    A Novel Hybrid CNN Denoising Technique (HDCNN) for Image Denoising with Improved Performance

    Image denoising has been tackled with deep convolutional neural networks (CNNs), which have powerful learning capabilities. Unfortunately, some CNNs perform poorly on complex scenes because they train only a single plain deep network for their denoising models. We propose a hybrid CNN denoising technique (HDCNN) to address this problem. An HDCNN consists of a dilated block (DB), a RepVGG block (RVB), a feature sharpening block (FB), and a single convolution. To gather more context information, the DB combines a dilated convolution, batch normalization (BN), standard convolutions, and the ReLU activation function. The RVB combines convolution, BN, and ReLU in parallel to obtain complementary width features. The FB refines the features obtained from the RVB to collect more precise information. A single convolution, working together with a residual learning step, then constructs the clean image. These elements enable the HDCNN to carry out image denoising efficiently. Experiments on public data sets show that the proposed HDCNN achieves good denoising performance.
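
    The block structure described above can be sketched roughly as follows in PyTorch; the channel counts, kernel sizes, and dilation rate are assumptions, and the RepVGG and feature sharpening blocks are reduced to minimal stand-ins, so this illustrates the dilated-convolution / parallel-branch / residual-learning pattern rather than the authors' exact network.

        import torch
        import torch.nn as nn

        class DilatedBlock(nn.Module):
            # Dilated convolution + BN + standard convolution + ReLU for wider context
            def __init__(self, ch):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(ch, ch, 3, padding=2, dilation=2),
                    nn.BatchNorm2d(ch),
                    nn.Conv2d(ch, ch, 3, padding=1),
                    nn.ReLU(inplace=True),
                )
            def forward(self, x):
                return self.body(x)

        class ParallelConvBlock(nn.Module):
            # Stand-in for the RepVGG-style block: parallel 3x3 and 1x1 branches
            # (each conv + BN) merged by addition and passed through ReLU.
            def __init__(self, ch):
                super().__init__()
                self.b3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
                self.b1 = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.BatchNorm2d(ch))
                self.act = nn.ReLU(inplace=True)
            def forward(self, x):
                return self.act(self.b3(x) + self.b1(x))

        class HDCNNSketch(nn.Module):
            def __init__(self, ch=32):
                super().__init__()
                self.head = nn.Conv2d(1, ch, 3, padding=1)
                self.db = DilatedBlock(ch)
                self.rvb = ParallelConvBlock(ch)
                self.fb = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
                self.tail = nn.Conv2d(ch, 1, 3, padding=1)
            def forward(self, noisy):
                feats = self.fb(self.rvb(self.db(self.head(noisy))))
                # Residual learning: predict the noise and subtract it from the input
                return noisy - self.tail(feats)

        x = torch.randn(1, 1, 64, 64)       # one noisy grayscale patch
        print(HDCNNSketch()(x).shape)       # torch.Size([1, 1, 64, 64])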