8,505 research outputs found

    Human factors consideration in the interaction process with virtual environment

    Industry has new requirements for computer-aided design (CAD) data. Techniques for CAD data management, together with current computing power, enable the extraction of a virtual mock-up for interactive use. CAD data may also be distributed and shared by designers in various parts of the world (within the same company and with subcontractors). The use of the digital mock-up is not limited to the mechanical design of the product; it is intended to serve as many trades in industry as possible. One of the main issues is to enable evaluation of the product without any physical representation, based solely on its virtual representation. To that end, most major industrial actors use virtual reality (VR) technologies, which essentially enable the designer to perceive the product during the design process. This perception has to be rendered in a way that guarantees the evaluation is carried out as it would be under real conditions. The perception arises from the interplay between the user and the VR technologies; thus, in the experiment design, the whole human-VR system has to be considered.

    A Perceptually Based Comparison of Image Similarity Metrics

    The assessment of how well one image matches another forms a critical component both of models of human visual processing and of many image-analysis systems. Two of the most commonly used norms for quantifying image similarity are L1 and L2, which are specific instances of the Minkowski metric. However, there is often no principled reason for selecting one norm over the other. One way to address this problem is to examine whether one metric captures the perceptual notion of image similarity better than the other. This can be used to draw inferences about the similarity criteria the human visual system uses, as well as to evaluate and design metrics for image-analysis applications. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created by vector quantization. In both conditions the participants showed a small but consistent preference for images matched with the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
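
    To make the comparison concrete, here is a minimal numpy sketch (a toy construction, not the study's stimuli) showing that the L1 and L2 norms can disagree about which candidate patch best matches a query when one candidate concentrates its error in a few outlier pixels:

```python
import numpy as np

def l1_distance(a, b):
    # Minkowski metric with p=1: sum of absolute pixel differences.
    return np.abs(a.astype(float) - b.astype(float)).sum()

def l2_distance(a, b):
    # Minkowski metric with p=2: Euclidean distance between pixel vectors.
    return np.sqrt(((a.astype(float) - b.astype(float)) ** 2).sum())

def best_match(query, candidates, metric):
    # Index of the candidate patch closest to the query under the given metric.
    return min(range(len(candidates)), key=lambda i: metric(query, candidates[i]))

# Toy 4x4 grayscale patches.
query = np.full((4, 4), 100.0)
candidates = [
    np.full((4, 4), 116.0),                     # every pixel off by 16
    np.full((4, 4), 100.0) + np.eye(4) * 62.0,  # four pixels off by 62
]

# L2 squares the differences, so it penalises the outlier patch more heavily;
# L1 weighs total deviation linearly, and here the two norms disagree.
print(best_match(query, candidates, l1_distance))  # -> 1
print(best_match(query, candidates, l2_distance))  # -> 0
```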

    A reduced-reference perceptual image and video quality metric based on edge preservation

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric that accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback to the system controller. Since the original image/video sequence (prior to compression and transmission) is not usually available at the receiver side, the receiver must rely on an objective video quality metric that needs no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to the edge and contour information of an image underpins our proposed reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric. © 2012 Martini et al.
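
    A minimal sketch of the reduced-reference idea, using an invented edge-histogram feature rather than the metric actually proposed in the paper: the transmitter sends only a few numbers describing the original frame's edges, and the receiver scores the received frame against them.

```python
import numpy as np

def edge_map(img):
    # Gradient magnitude as a simple edge map.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def edge_feature(img, bins=8):
    # Compact reduced-reference feature: normalised histogram of edge
    # magnitudes. Only these few numbers need to reach the receiver.
    h, _ = np.histogram(edge_map(img), bins=bins, range=(0.0, 255.0))
    return h / h.sum()

def rr_quality(ref_feature, received):
    # Smaller histogram distance -> edges better preserved -> higher score.
    d = np.abs(ref_feature - edge_feature(received)).sum()
    return 1.0 / (1.0 + d)

# Toy frame: a checkerboard, then a box-blurred (edge-degraded) copy.
board = np.kron((np.indices((8, 8)).sum(axis=0) % 2) * 255.0, np.ones((4, 4)))
blurred = (board + np.roll(board, 1, 0) + np.roll(board, 1, 1)
           + np.roll(np.roll(board, 1, 0), 1, 1)) / 4.0

feature = edge_feature(board)              # transmitted side information
print(rr_quality(feature, board))          # -> 1.0 (edges intact)
print(rr_quality(feature, blurred) < 1.0)  # -> True (edges degraded)
```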

    BiofilmQuant: A Computer-Assisted Tool for Dental Biofilm Quantification

    Dental biofilm is the deposition of microbial material over a tooth substratum. Several methods have recently been reported in the literature for biofilm quantification; at best, however, they provide a barely automated solution that still requires significant input from a human expert. Conversely, state-of-the-art automatic biofilm methods fail to make their way into clinical practice because they lack an effective mechanism for incorporating human input to handle misclassified regions. Manual delineation, the current gold standard, is time consuming and subject to expert bias. In this paper, we introduce a new semi-automated software tool, BiofilmQuant, for dental biofilm quantification in quantitative light-induced fluorescence (QLF) images. The software uses a robust statistical modeling approach to automatically segment the QLF image into three classes (background, biofilm, and tooth substratum) based on training data. This initial segmentation has shown a high degree of consistency and precision on more than 200 test QLF dental scans. Further, the proposed software gives clinicians full control to fix any misclassified area with a single click. In addition, BiofilmQuant provides a complete solution for the longitudinal quantitative analysis of biofilm over the full set of teeth, offering greater ease of use. Comment: 4 pages, 4 figures, 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2014)
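
    As a toy stand-in for the segmentation step (the tool's actual statistical model is more sophisticated, and the intensity values below are invented), a three-class split of QLF pixel intensities can be sketched with 1-D k-means, followed by the quantification step:

```python
import numpy as np

def segment_three_classes(pixels, iters=10):
    # 1-D k-means into three intensity classes. Centers are initialised at
    # intensity quantiles, then relabelled so that class 0 is darkest
    # (background), 1 is intermediate (biofilm), 2 is brightest (tooth).
    centers = np.quantile(pixels, [0.1, 0.5, 0.9])
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(3):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    order = np.argsort(centers)
    remap = np.empty(3, dtype=int)
    remap[order] = np.arange(3)
    return remap[labels]

def biofilm_fraction(labels):
    # Quantification step: share of pixels assigned to the biofilm class.
    return float(np.mean(labels == 1))

# Toy "scan": equal numbers of dark, mid and bright pixels.
pixels = np.concatenate([np.full(100, 10.0),    # background
                         np.full(100, 120.0),   # biofilm
                         np.full(100, 240.0)])  # tooth substratum
labels = segment_three_classes(pixels)
print(biofilm_fraction(labels))  # -> about 0.333
```

The one-click correction described in the paper would then amount to relabelling a chosen connected region, with the fraction recomputed afterwards.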

    Video Quality Metrics


    Mapping the spatiotemporal dynamics of calcium signaling in cellular neural networks using optical flow

    An optical flow gradient algorithm was applied to spontaneously forming networks of neurons and glia in culture, imaged by fluorescence optical microscopy, in order to map functional calcium signaling with single-pixel resolution. Optical flow estimates the direction and speed of motion of objects in an image between subsequent frames in a recorded digital sequence of images (i.e., a movie). The vector fields computed by the algorithm were able to track the spatiotemporal dynamics of calcium signaling patterns. We begin by briefly reviewing the mathematics of the optical flow algorithm, and then describe how to solve for the displacement vectors and how to measure their reliability. We then compare computed flow vectors with manually estimated vectors for the progression of a calcium signal recorded from representative astrocyte cultures. Finally, we applied the algorithm to preparations of primary astrocytes and hippocampal neurons and to the rMC-1 Muller glial cell line in order to illustrate its capability for capturing different types of spatiotemporal calcium activity. We discuss the imaging requirements, parameter selection and threshold selection for reliable measurements, and offer perspectives on uses of the vector data. Comment: 23 pages, 5 figures. Peer-reviewed accepted version in press in Annals of Biomedical Engineering
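
    The flow computation the authors review can be illustrated with a bare-bones version (their method produces dense per-pixel vector fields with reliability measures; this sketch, my own simplification, solves for one global displacement via the Lucas-Kanade least-squares step):

```python
import numpy as np

def lucas_kanade_global(frame0, frame1):
    # Least-squares solve of the optical-flow constraint Ix*u + Iy*v = -It
    # over the whole frame (a single global Lucas-Kanade window).
    iy, ix = np.gradient(frame0.astype(float))        # spatial gradients
    it = frame1.astype(float) - frame0.astype(float)  # temporal difference
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(a, -it.ravel(), rcond=None)
    return u, v  # displacement along x and y, in pixels per frame

# Toy sequence: a Gaussian "calcium transient" drifting 1 pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
def blob(cx, cy):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 50.0)

u, v = lucas_kanade_global(blob(30.0, 32.0), blob(31.0, 32.0))
print(u, v)  # u close to 1.0, v close to 0.0
```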

    Kuvanlaatukokemuksen arvioinnin instrumentit (Instruments for evaluating the image quality experience)

    This dissertation describes the instruments available for image quality evaluation, develops new methods for subjective image quality evaluation and provides image and video databases for the assessment and development of image quality assessment (IQA) algorithms. The contributions of the thesis are based on six original publications. The first publication introduced the VQone toolbox for subjective image quality evaluation. It created a platform for free-form experimentation with standardized image quality methods and was the foundation for later studies. The second publication focused on the dilemma of reference in subjective experiments by proposing a new method for image quality evaluation: the absolute category rating with dynamic reference (ACR-DR). The third publication presented a database (CID2013) in which 480 images were evaluated by 188 observers using the ACR-DR method proposed in the prior publication. Providing databases of image files along with their quality ratings is essential in the field of IQA algorithm development. The fourth publication introduced a video database (CVD2014) based on having 210 observers rate 234 video clips. The temporal aspect of the stimuli creates peculiar artifacts and degradations, as well as challenges to experimental design and video quality assessment (VQA) algorithms. When the CID2013 and CVD2014 databases were published, most state-of-the-art I/VQAs had been trained on and tested against databases created by degrading an original image or video with a single distortion at a time. The novel aspect of CID2013 and CVD2014 was that they consisted of multiple concurrent distortions. To facilitate communication and understanding among professionals in various fields of image quality as well as among non-professionals, an attribute lexicon of image quality, the image quality wheel, was presented in the fifth publication of this thesis. 
Reference wheels and terminology lexicons have a long tradition in sensory evaluation contexts, such as taste experience studies, where they are used to facilitate communication among interested stakeholders; however, such an approach has not been common in visual experience domains, especially in studies on image quality. The sixth publication examined how the free descriptions given by the observers influenced the ratings of the images. Understanding how various elements, such as perceived sharpness and naturalness, affect subjective image quality can help to understand the decision-making processes behind image quality evaluation. Knowing the impact of each preferential attribute can then be used for I/VQA algorithm development; certain I/VQA algorithms already incorporate low-level human visual system (HVS) models.

Finnish abstract (translated): This dissertation examines and develops new methods for image quality evaluation, and provides image and video databases for testing and developing image quality assessment (IQA) algorithms. What is experienced as beautiful and pleasant is a psychologically interesting question, and the work is also relevant to industry in the development of camera image quality. The dissertation comprises six publications that examine the topic from different perspectives. Publication I developed an application, freely available to researchers, for collecting people's quality ratings of presented images. It made it possible to test standardized image quality evaluation methods and to develop new ones on their basis, laying the foundation for the later studies. Publication II developed a new image quality evaluation method. The method uses a serial presentation of images to give participants an impression of the quality variation before the actual rating; this was found to reduce the variance of the results and to discriminate smaller image quality differences. Publication III describes a database containing the quality ratings given by 188 participants to 480 images, together with the image files. Such databases are a valuable tool in developing algorithms for automatic image quality assessment: they are needed, for example, as training material for AI-based algorithms and for comparing the performance of different algorithms. The better an algorithm's predictions correlate with the quality ratings given by people, the better its performance can be said to be. Publication IV presents a database containing the quality ratings given by 210 participants to 234 video clips, together with the video files. Because of the temporal dimension, the artifacts in video stimuli differ from those in still images, which poses its own challenges to video quality assessment (VQA) algorithms. The stimuli in earlier databases were created, for example, by gradually blurring a single image, so they contain only one-dimensional distortions; the databases presented here differ from earlier ones in containing several simultaneous distortions whose interaction can have a significant effect on image quality. Publication V presents the image quality wheel, a lexicon of image quality concepts compiled by analyzing 39,415 verbal descriptions of image quality produced by 146 participants. Such lexicons have a long tradition in sensory evaluation research, but they had not previously been developed for image quality. Publication VI examined how the concepts given by the raters influence the evaluation of image quality; for example, the rated sharpness or naturalness of images helps in understanding the decision-making processes underlying quality evaluation. This knowledge can be used, for example, in the development of image and video quality assessment (I/VQA) algorithms.
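
    The benchmarking principle mentioned above (the better an algorithm's predictions correlate with observers' ratings, the better its performance) can be sketched in a few lines; the scores below are made-up illustrative numbers, not data from CID2013 or CVD2014:

```python
import numpy as np

def pearson(predictions, ratings):
    # Linear correlation between predicted quality and mean opinion scores.
    return float(np.corrcoef(predictions, ratings)[0, 1])

# Hypothetical mean opinion scores (MOS) for five stimuli, and two
# hypothetical algorithms' predicted scores for the same stimuli.
mos   = np.array([1.2, 2.5, 3.1, 3.9, 4.6])
alg_a = np.array([1.0, 2.2, 3.3, 4.1, 4.8])  # tracks the ratings closely
alg_b = np.array([3.0, 1.1, 4.0, 2.2, 3.5])  # weak relationship

# Algorithm A would be judged the better quality metric here.
print(pearson(alg_a, mos) > pearson(alg_b, mos))  # -> True
```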

    Mathias, Harry

    San Francisco State University, Radio, Television, Film, BA, 1968. San Francisco State University, Creative Arts Interdisciplinary Studies with emphasis in Film Production, Broadcast TV Production, Broadcast Engineering, Drama, and Computer Graphic Video Imaging Systems Design, MA, 1974. https://scholarworks.sjsu.edu/erfa_bios/1359/thumbnail.jp

    I'm sorry to say, but your understanding of image processing fundamentals is absolutely wrong

    The ongoing discussion of whether modern vision systems should be viewed as visually enabled cognitive systems or cognitively enabled vision systems is groundless, because the perceptual and cognitive faculties of vision are separate components in the modeling of the human (and, consequently, artificial) information processing system. Comment: To be published as chapter 5 in "Frontiers in Brain, Vision and AI", I-TECH Publisher, Vienna, 200
