396 research outputs found

    Segment Based Indexing Technique for Video Data File

    Abstract: Video is an effective medium for exchanging information in a compact visual form, thanks to advances in technology. Capturing video is an effortless process, but retrieving a related video is difficult; to support retrieval, videos must be indexed. Retrieval is the process of finding a video that matches a user query; the query may be an image or text, and depending on the query, the system returns a particular video or image. In this project we build an index for a video file using a segment-based indexing technique: the video is divided into a hierarchy analogous to the storyboards used in film-making. A hierarchical video search is thus composed of multiple stages of abstraction that help users locate specific video segments or frames logically. This paper shows that the approach reduces the bandwidth and delay of searching and reviewing video over the network. Experimental results verify this.
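The storyboard-style hierarchy described above can be sketched as a small tree of annotated segments; a query is matched against segment annotations so that retrieval can return a specific segment rather than the whole file. All names, the two-level structure, and the keyword annotations below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a segment-based video index (hypothetical structure):
# a video splits into scenes and shots, each carrying keyword annotations.

class Segment:
    def __init__(self, label, start, end, keywords=None, children=None):
        self.label = label                  # e.g. "scene-1" or "shot-1.2"
        self.start, self.end = start, end   # time range in seconds
        self.keywords = set(keywords or [])
        self.children = children or []

def search(segment, query_terms):
    """Return the deepest segments whose annotations match all query terms."""
    hits = []
    for child in segment.children:
        hits.extend(search(child, query_terms))
    if hits:                                # prefer more specific (deeper) matches
        return hits
    if query_terms <= segment.keywords:
        return [segment]
    return []

# A tiny two-level index: one video, two scenes, two shots in the first scene.
video = Segment("video", 0, 120, {"lecture"}, [
    Segment("scene-1", 0, 60, {"intro", "lecture"}, [
        Segment("shot-1.1", 0, 30, {"title", "intro"}),
        Segment("shot-1.2", 30, 60, {"speaker", "intro"}),
    ]),
    Segment("scene-2", 60, 120, {"demo", "lecture"}),
])

result = search(video, {"speaker"})         # returns only shot-1.2
```

Because the search recurses before testing the current node, a query always lands on the most specific matching segment, which is what lets the system ship a 30-second shot instead of the full video.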

    An Outlook into the Future of Egocentric Vision

    What will the future be? We wonder! In this survey, we explore the gap between current research in egocentric vision and the ever-anticipated future, where wearable computing, with outward-facing cameras and digital overlays, is expected to be integrated into our everyday lives. To understand this gap, the article starts by envisaging the future through character-based stories, showcasing through examples the limitations of current technology. We then provide a mapping between this future and previously defined research tasks. For each task, we survey its seminal works, current state-of-the-art methodologies and available datasets, then reflect on shortcomings that limit its applicability to future research. Note that this survey focuses on software models for egocentric vision, independent of any specific hardware. The paper concludes with recommendations for areas of immediate exploration so as to unlock our path to the future of always-on, personalised and life-enhancing egocentric vision. Comment: We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1

    Towards Everyday Virtual Reality through Eye Tracking

    With developments in computer graphics, hardware technology, perception engineering, and human-computer interaction, virtual reality and virtual environments are becoming more integrated into our daily lives. Head-mounted displays, however, are still not used as frequently as other mobile devices such as smartphones and smartwatches.
    With increased usage of this technology and the acclimation of humans to virtual application scenarios, it is possible that an everyday virtual reality paradigm will be realized in the near future. When considering the marriage of everyday virtual reality and head-mounted displays, eye tracking is an emerging technology that helps to assess human behaviors in a real-time and non-intrusive way. Still, multiple aspects need to be researched before these technologies become widely available in daily life. Firstly, attention and cognition models in everyday scenarios should be thoroughly understood. Secondly, as eyes are related to visual biometrics, privacy-preserving methodologies are necessary. Lastly, instead of studies or applications utilizing limited human participants with relatively homogeneous characteristics, protocols and use cases for making such technology more accessible are essential. In this work, taking the aforementioned points into account, a significant scientific push towards everyday virtual reality has been completed with three main research contributions. Human visual attention and cognition have been researched in virtual reality in two different domains, education and driving. Research in the education domain has focused on the effects of different classroom manipulations on human visual behaviors, whereas research in the driving domain has targeted safety-related issues and gaze guidance. The user studies in both domains show that eye movements offer significant implications for these everyday setups. The second substantial contribution focuses on privacy-preserving eye tracking for the eye movement data gathered from head-mounted displays. This includes differential privacy that takes temporal correlations of eye movement signals into account, and a privacy-preserving gaze estimation approach utilizing a randomized encoding-based framework that uses eye landmarks.
    The results of both works indicate that privacy can be preserved while keeping utility in a reasonable range. Even though few works have focused on this aspect of eye tracking until now, more research is necessary to support everyday virtual reality. As a final significant contribution, a blockchain- and smart contract-based eye tracking data collection protocol for virtual reality is proposed to make virtual reality more accessible. The findings present valuable insights for everyday virtual reality and advance the state of the art in several directions.
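The privacy/utility trade-off mentioned above can be illustrated with the simplest differential-privacy baseline: adding Laplace noise to each gaze sample before release. The thesis specifically accounts for temporal correlations in gaze signals, which this toy sketch ignores; the function name, the sensitivity value, and the epsilon budget are all illustrative assumptions, not the thesis's method.

```python
# Minimal sketch: per-sample Laplace mechanism on a 1-D gaze trace.
import math
import random

def privatize_gaze(signal, sensitivity, epsilon, seed=0):
    """Add i.i.d. Laplace noise to each gaze sample (epsilon-DP per sample)."""
    rng = random.Random(seed)         # seeded for reproducibility in this demo
    scale = sensitivity / epsilon     # Laplace scale b = Delta / epsilon
    noisy = []
    for x in signal:
        u = rng.random() - 0.5        # inverse-CDF Laplace sampling
        noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        noisy.append(x + noise)
    return noisy

gaze = [0.10, 0.12, 0.15, 0.40, 0.42]   # normalized screen x-positions
released = privatize_gaze(gaze, sensitivity=1.0, epsilon=5.0)
```

Smaller epsilon means larger noise scale and stronger privacy but lower utility; correlated gaze samples, as the thesis observes, leak more than this per-sample accounting suggests, which is why temporal correlation must be handled explicitly.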

    Gesture in Automatic Discourse Processing

    Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features -- extracted automatically from video -- yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
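The idea of combining gestural and textual evidence for noun phrase resolution can be sketched with a toy linear model: each mention carries a phrase and a hand position, and a weighted sum of lexical and gestural similarity decides coreference. Every name, weight, and feature here is a hypothetical illustration, not the thesis's actual structured model.

```python
# Minimal sketch: gesture features supplementing text for coreference.

def text_similarity(np1, np2):
    """Crude lexical overlap (Jaccard) between two noun phrases."""
    a, b = set(np1.lower().split()), set(np2.lower().split())
    return len(a & b) / len(a | b)

def gesture_similarity(g1, g2):
    """Similarity of hand positions (x, y) held during the two mentions."""
    dist = ((g1[0] - g2[0]) ** 2 + (g1[1] - g2[1]) ** 2) ** 0.5
    return 1.0 / (1.0 + dist)

def coreferent(m1, m2, w_text=0.6, w_gesture=0.4, threshold=0.5):
    (phrase1, gesture1), (phrase2, gesture2) = m1, m2
    score = (w_text * text_similarity(phrase1, phrase2)
             + w_gesture * gesture_similarity(gesture1, gesture2))
    return score >= threshold

# "this block" / "the block" said with the hand in the same place is more
# likely coreferent than the same words with the hand somewhere else.
same_place = coreferent(("this block", (0.2, 0.3)), ("the block", (0.2, 0.3)))
far_apart  = coreferent(("this block", (0.2, 0.3)), ("the block", (0.9, 0.9)))
```

The point matches the abstract's first claim: the gesture feature is a pattern (spatial consistency across mentions) rather than an interpretation of any single gesture, so it can generalize across speakers.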

    Affective Brain-Computer Interfaces


    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues for the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. 
The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects, and by University College London.

    eXtended Reality for Education and Training

    The abstract is in the attachment.