
    A Novel Gesture-based CAPTCHA Design for Smart Devices

    CAPTCHAs have been widely used in Web applications to prevent service abuse. With the evolution of computing from desktop to ubiquitous environments, more and more users access Web applications on smart devices, where touch-based interaction is dominant. However, the majority of CAPTCHAs are designed for use on computers and laptops, and do not reflect this shift in interaction style well. In this paper, we propose a novel CAPTCHA design that exploits the convenience of the touch interface while retaining the needed security. This is achieved through a hybrid challenge that takes advantage of humans' cognitive abilities. A prototype was also developed and found to be more user-friendly than conventional CAPTCHAs in a preliminary user acceptance test.
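    The abstract does not detail the hybrid challenge, but a minimal sketch of one ingredient such a touch-oriented design could use, validating a finger-drawn trace against a prompted target gesture, might look like the following; the resampling scheme and acceptance threshold are hypothetical, not the authors' design:

```python
import math

def path_length(pts):
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def resample(pts, n=32):
    """Resample a touch trace to n points evenly spaced along its length."""
    interval = path_length(pts) / (n - 1)
    pts = list(pts)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d == 0:
            i += 1
            continue
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # measure the next segment from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(tuple(pts[-1]))
    return out[:n]

def normalize(pts):
    """Translate to the centroid and scale by the larger bounding-box side."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    w = max(p[0] for p in pts) - min(p[0] for p in pts)
    h = max(p[1] for p in pts) - min(p[1] for p in pts)
    s = max(w, h) or 1.0
    return [((p[0] - cx) / s, (p[1] - cy) / s) for p in pts]

def gesture_matches(trace, template, threshold=0.12):
    """Accept when the mean point-to-point distance falls below the threshold."""
    a = normalize(resample(trace))
    b = normalize(resample(template))
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a) < threshold
```

    Matching normalized, resampled traces tolerates the position and scale variation natural to touch input while still rejecting traces that do not follow the prompted shape; a deployed design would combine this with the challenge generation itself.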

    CAPTCHA Types and Breaking Techniques: Design Issues, Challenges, and Future Research Directions

    The proliferation of the Internet and mobile devices has given malicious bots access to genuine resources and data. Bots may instigate phishing, unauthorized access, denial-of-service, and spoofing attacks, to mention a few. Authentication and testing mechanisms that verify end-users and prohibit malicious programs from infiltrating services and data are a strong defense against malicious bots. A Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is an authentication process that confirms the user is a human and, hence, grants access. This paper provides an in-depth survey of CAPTCHAs and focuses on two main things: (1) a detailed discussion of various CAPTCHA types along with their advantages, disadvantages, and design recommendations, and (2) an in-depth analysis of different CAPTCHA-breaking techniques. The survey is based on over two hundred studies on the subject conducted from 2003 to date. The analysis reinforces the need to design more attack-resistant CAPTCHAs while keeping their usability intact. The paper also highlights the design challenges and open issues related to CAPTCHAs, and provides useful recommendations for breaking CAPTCHAs.

    Mothers' Adaptation to Caring for a New Baby

    To date, most research on parents' adjustment after adding a new baby to their family unit has focused on mothers' initial transition to parenthood. This past research has examined changes in mothers' marital satisfaction and perceived well-being across the transition, and has compared their prenatal expectations to their postnatal experiences. This project assessed first-time and experienced mothers' stress and satisfaction associated with parenting, their adjustment to competing demands, and their perceived well-being longitudinally before and after the birth of a baby. Additionally, it assessed how maternal and child-related variables influenced the trajectory of mothers' postnatal adaptation. These variables included mothers' age, their education level, their prenatal expectations and postnatal experiences concerning shared infant care, their satisfaction with the division of infant caregiving, and their perceptions of their infant's temperament. Mothers (N = 136) completed an online survey during their third trimester and additional online surveys when their baby was approximately 2, 4, 6, and 8 weeks old.
    First-time mothers prenatally expected a more equal division of infant caregiving between themselves and their partners than did experienced mothers. Both first-time and experienced mothers reported less assistance from their partners than they had prenatally expected, and they experienced almost twice as many violated expectations as met expectations. Growth curve modeling revealed that a cubic function of time best fit the trajectory of mothers' postnatal parenting satisfaction: mothers reported less parenting satisfaction at 4 weeks than at 2 and 6 weeks, and reported stability in their satisfaction between 6 and 8 weeks. A quadratic function of time best fit the trajectories of mothers' postnatal parenting stress and adjustment to the demands of their baby: mothers reported more stress and difficulty adjusting to their baby's demands at 4 and 6 weeks than at 2 and 8 weeks. A linear function of time best fit the trajectories of mothers' adjustment to home demands, generalized state anxiety, and depressive symptoms: mothers reported less difficulty meeting home demands, less generalized anxiety, and fewer depressive symptoms across the postnatal period. Mothers' violated expectations were associated with level differences in all aspects of their postnatal adaptation except their adjustment to home demands; specifically, more violated expectations, in number or in magnitude, were associated with poorer postnatal adaptation. Violated expectations were not associated with the slope of mothers' postnatal adaptation trajectories. Exploratory models revealed that other maternal and child-related variables also affected the level and slope of mothers' postnatal adaptation.
    Overall, first-time and experienced mothers were more similar than different with regard to their postnatal adaptation. This study suggests that prior findings concerning adults' initial transition to parenthood may also apply during each addition of a new baby to the family unit. Additionally, mothers who reported less of a mismatch between their expectations and experiences concerning shared infant care had fewer issues adapting to the postnatal period; thus, methods to increase the assistance mothers receive from their partners should be sought. Limitations of this study and suggestions for future research are also discussed.
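    For readers unfamiliar with growth curve modeling, the cubic trajectory reported for parenting satisfaction corresponds to a polynomial specification of roughly the following generic form (a standard formulation, not necessarily the study's exact model):

```latex
y_{it} = \beta_0 + \beta_1 t_{it} + \beta_2 t_{it}^2 + \beta_3 t_{it}^3 + u_{0i} + \varepsilon_{it}
```

    Here y_it is mother i's outcome at postnatal week t, u_0i is a mother-specific random intercept, and the quadratic and linear trajectories reported for the other outcomes simply drop the higher-order terms.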

    Enhancing Online Security with Image-based Captchas

    Given the data loss, productivity, and financial risks posed by security breaches, there is a great need to protect online systems from automated attacks. Completely Automated Public Turing Tests to Tell Computers and Humans Apart, known as CAPTCHAs, are commonly used as one layer in providing online security. These tests are intended to be easily solvable by legitimate human users while being challenging for automated attackers to complete successfully. Traditionally, CAPTCHAs have asked users to perform tasks based on text recognition or categorization of discrete images to prove whether or not they are legitimate human users. Over time, the efficacy of these CAPTCHAs has been eroded by improved optical character recognition, image classification, and machine learning techniques that can accurately solve many CAPTCHAs at rates approaching those of humans. These CAPTCHAs can also be difficult to complete using the touch-based input methods found on widely used tablets and smartphones.
    This research proposes the design of CAPTCHAs that address the shortcomings of existing implementations. These CAPTCHAs require users to perform different image-based tasks, including face detection, face recognition, multimodal biometrics recognition, and object recognition, to prove they are human. These are tasks that humans excel at but which remain difficult for computers to complete successfully. They can also be readily performed using click- or touch-based input methods, facilitating their use on both traditional computers and mobile devices.
    Several strategies are utilized by the CAPTCHAs developed in this research to enable high human success rates while ensuring negligible automated attack success rates. One such technique, used by fgCAPTCHA, employs image quality metrics and face detection algorithms to calculate fitness values representing the simulated performance of human users and automated attackers, respectively, at solving each generated CAPTCHA image. A genetic learning algorithm uses these fitness values to determine customized generation parameters for each CAPTCHA image. Other approaches, including gradient descent learning, artificial immune systems, and multi-stage performance-based filtering processes, are also proposed in this research to optimize the generated CAPTCHA images.
    An extensive RESTful web service-based evaluation platform was developed to facilitate the testing and analysis of the CAPTCHAs developed in this research. Users recorded over 180,000 attempts at solving these CAPTCHAs using a variety of devices. The results show the designs created in this research offer high human success rates, up to 94.6% in the case of aiCAPTCHA, while ensuring resilience against automated attacks.
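    The fgCAPTCHA description above pairs a simulated human-solvability score with a simulated attacker score and lets a genetic algorithm search the space of generation parameters. A minimal sketch of that loop, in which placeholder scoring functions stand in for the paper's image quality metrics and face detection calls, might look like this:

```python
import random

# Hypothetical generation parameters: (noise level, distortion strength, brightness)
BOUNDS = [(0.0, 1.0), (0.0, 1.0), (0.5, 1.5)]

def human_score(params):
    """Placeholder for an image-quality-based estimate of human solvability."""
    noise, distortion, _ = params
    return max(0.0, 1.0 - 0.4 * noise - 0.5 * distortion)

def attacker_score(params):
    """Placeholder for a simulated face-detector success rate."""
    noise, distortion, _ = params
    return max(0.0, 1.0 - 1.2 * noise - 1.5 * distortion)

def fitness(params):
    # Reward images that stay easy for humans but hard for automated attackers.
    return human_score(params) - attacker_score(params)

def mutate(params, rate=0.2):
    return tuple(
        min(hi, max(lo, p + random.gauss(0, rate) if random.random() < 0.5 else p))
        for p, (lo, hi) in zip(params, BOUNDS)
    )

def evolve(pop_size=50, generations=100):
    pop = [tuple(random.uniform(lo, hi) for lo, hi in BOUNDS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the best fifth as parents
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

print(evolve())  # tuned generation parameters for one CAPTCHA image
```

    The key design point is the fitness function: it rewards parameter settings that keep the image easy for humans while degrading automated solvers, which is exactly the trade-off the abstract describes.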

    Human-artificial intelligence approaches for secure analysis in CAPTCHA codes

    CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has long been used to keep automated bots from misusing web services, leveraging human-artificial intelligence (HAI) interactions to distinguish whether the user is a human or a computer program. Various CAPTCHA schemes have been proposed over the years, principally to increase usability and security against emerging bots and hackers performing malicious operations. However, automated attacks have effectively cracked all common conventional schemes, and the majority of present CAPTCHA methods are also vulnerable to human-assisted relay attacks. Invisible reCAPTCHA and some other approaches have not yet been cracked, but with the introduction of fourth-generation bots that accurately mimic human behavior, a secure CAPTCHA can hardly be designed without additional special devices. Almost all cognitive-based CAPTCHAs with sensor support have not yet been compromised by automated attacks; however, they remain vulnerable to human-assisted relay attacks because they offer a limited number of challenges and can only be solved using trusted devices. Cognitive-based CAPTCHA schemes clearly have an advantage over other schemes in the race against security attacks. In this study, as a strong starting point for creating future secure and usable CAPTCHA schemes, we offer an overview of HAI between computer users and computers from a security perspective, covering the open problems, difficulties, and opportunities of current CAPTCHA schemes.

    Avatar captcha: telling computers and humans apart via face classification and mouse dynamics.

    Bots are automated computer programs that execute malicious scripts and predefined functions on an affected computer. They pose cybersecurity threats and are one of the most sophisticated and common types of cybercrime tools today. They spread viruses, generate spam, steal sensitive personal information, rig online polls, and commit other types of online crime and fraud. They sneak into unprotected systems through the Internet by seeking vulnerable entry points, and they access the system's resources as a human user would. How do we counter this, preventing bots while still allowing human users to access system resources? One solution is to design a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a program that can generate and grade tests that most humans can pass but computers cannot. CAPTCHAs are used as a tool to distinguish humans from malicious bots, and they are a class of Human Interactive Proofs (HIPs) meant to be easily solvable by humans and economically infeasible for computers. Text CAPTCHAs are very popular and commonly used: for each challenge, they generate a sequence of characters by distorting standard fonts and ask users to identify and type them out. However, they are vulnerable to character segmentation attacks by bots, dependent on the English language, and increasingly too complex for people to solve. A solution is to design image CAPTCHAs, which use images instead of text and require users to identify certain images to solve the challenges. They are user-friendly and convenient for human users, and a much more challenging problem for bots to solve.
    In today's Internet world, user profiling and user identification have gained a great deal of significance. Identity theft and similar attacks can be prevented by providing authorized access to resources. Achieving a timely response to a security breach requires frequent user verification, yet this process must be passive, transparent, and non-obtrusive. For such a system to be practical, it must be accurate, efficient, and difficult to forge. Behavioral biometric systems are usually less prominent than traditional biometric systems; however, they provide numerous significant advantages over them. Collecting behavior data is non-obtrusive and cost-effective, as it requires no special hardware. While these systems are not distinctive enough to provide reliable human identification, they have been shown to be highly accurate in identity verification. In accomplishing everyday tasks, human beings use different styles and strategies and apply unique skills and knowledge; these define the behavioral traits of the user. Behavioral biometrics attempts to quantify these traits to profile users and establish their identity. Human-computer interaction (HCI)-based biometrics comprise the interaction strategies and styles between a human and a computer; these unique user traits are quantified to build profiles for identification. A specific category of HCI-based biometrics records human interaction with the mouse as the input device and is known as Mouse Dynamics. By monitoring the mouse usage activities a user produces during interaction with the GUI, a unique profile can be created for that user that can help identify him or her. Mouse-based verification approaches do not record sensitive user credentials like usernames and passwords, and thus avoid privacy issues.
    An image CAPTCHA is proposed that incorporates Mouse Dynamics to help fortify it. It displays random images obtained from Yahoo's Flickr, and to solve the challenge the user must identify and select a certain class of images. Two theme-based challenges have been designed: Avatar CAPTCHA, which displays human and avatar faces, and Zoo CAPTCHA, which displays different animal species. In addition to the dynamically selected images, the way each user interacts with the mouse while attempting to solve the CAPTCHA (mouse clicks, mouse movements, mouse cursor screen coordinates, etc.) is recorded non-obtrusively at regular time intervals. These recorded mouse movements constitute the Mouse Dynamics Signature (MDS) of the user and provide an additional secure technique to segregate humans from bots. The security of the CAPTCHA is tested by an adversary executing a mouse bot that attempts to solve the CAPTCHA challenges.
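    As an illustration of the kind of signal a Mouse Dynamics Signature captures, a sketch that turns a stream of timestamped cursor positions into simple behavioral features (speeds, pauses, path straightness) might look like the following; the feature set and threshold are illustrative, not the thesis's exact design:

```python
import math
from dataclasses import dataclass

@dataclass
class MouseEvent:
    t: float  # timestamp in seconds
    x: float
    y: float

def mds_features(events, pause_threshold=0.15):
    """Summarize a mouse trace into a small behavioral feature vector."""
    speeds, pauses = [], 0
    for a, b in zip(events, events[1:]):
        dt = b.t - a.t
        if dt <= 0:
            continue
        if dt > pause_threshold:
            pauses += 1
        speeds.append(math.dist((a.x, a.y), (b.x, b.y)) / dt)
    path = sum(math.dist((a.x, a.y), (b.x, b.y)) for a, b in zip(events, events[1:]))
    direct = math.dist((events[0].x, events[0].y), (events[-1].x, events[-1].y))
    return {
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "max_speed": max(speeds, default=0.0),
        "pause_count": pauses,
        # Bots often move in near-perfect lines; humans rarely do.
        "straightness": direct / path if path else 1.0,
    }

# Example: a short trace recorded at regular intervals while solving a challenge.
trace = [MouseEvent(0.00, 10, 10), MouseEvent(0.05, 14, 11), MouseEvent(0.10, 22, 15)]
print(mds_features(trace))
```

    A classifier trained on such feature vectors can then separate human solvers from replayed or scripted bot traces.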

    Image Understanding for Automatic Human and Machine Separation.

    PhD thesis. The research presented in this thesis aims to extend the capabilities of human interaction proofs in order to improve security in web applications and services. The research focuses on developing a more robust and efficient Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to increase the gap between human recognition and machine recognition. Two main novel approaches are presented, each targeting a different area of human and machine recognition: a character recognition test and an image recognition test. Along with the novel approaches, a categorisation of the available CAPTCHA methods is also introduced.
    The character recognition CAPTCHA is based on creating depth perception by using shadows to represent characters. The characters are formed by the imaginary shadows produced by a light source, building on the gestalt principle that human beings perceive whole forms instead of just a collection of simple lines and curves. This approach was developed in two stages: first with two-dimensional characters, and then with three-dimensional character models. The image recognition CAPTCHA is based on turning faces into cartoons. The faces used belong to people in the entertainment business, politicians, and sportsmen. The principal basis of this approach is that face perception is a cognitive process that humans perform easily and with a high rate of success. The process uses face morphing techniques to distort the faces into cartoons, making the resulting image more robust against machine recognition. Exhaustive tests on both approaches using OCR software, SIFT image recognition, and face recognition software show an improvement in human recognition rates whilst preventing robots from breaking through the tests.
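    The evaluation described above pits the generated characters against OCR software. A toy version of such a check, rendering a challenge string, degrading it, and measuring whether off-the-shelf OCR still reads it, could be sketched as follows (generic PIL distortions stand in for the thesis's shadow-based rendering; pytesseract requires a local Tesseract install):

```python
from PIL import Image, ImageDraw, ImageFilter, ImageFont
import pytesseract  # requires the Tesseract binary to be installed

def render_challenge(text, distort=True):
    """Render the challenge text, optionally applying simple distortions."""
    img = Image.new("L", (240, 80), color=255)
    draw = ImageDraw.Draw(img)
    draw.text((20, 20), text, fill=0, font=ImageFont.load_default())
    if distort:
        img = img.rotate(8, fillcolor=255).filter(ImageFilter.GaussianBlur(1.5))
    return img

def ocr_breaks_it(text):
    """Return True if OCR recovers the challenge text, i.e. the design fails."""
    guess = pytesseract.image_to_string(render_challenge(text)).strip()
    return guess.lower() == text.lower()

print(ocr_breaks_it("W7K4"))
```

    A real evaluation would run this over a large sample of generated challenges and report the machine recognition rate alongside the human one, as the thesis does.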

    Sensing and awareness of 360º immersive videos on the move

    MSc thesis in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013.
    Video captures and presents events and scenarios with great authenticity, realism, and emotional impact. It has also become increasingly pervasive in everyday life: personal capture and playback devices, the Internet, social networks, and iTV are among the means through which video reaches users (Neng & Chambel, 2010; Noronha et al., 2012). Immersion in video thus has the potential for a strong emotional impact on viewers and for creating a strong sense of presence and engagement with the video (Douglas & Hargadon, 2000; Visch et al., 2010). In traditional video, however, the viewer's experience is limited to the angle the camera was pointing at during capture; the introduction of 360º video overcame that restriction. In the search for even greater immersion, topics such as multimedia sensing and mobility can be considered. Mobile devices have become increasingly omnipresent in modern society and, given the wide variety of sensors and actuators they include, offer a broad spectrum of opportunities for capturing and playing 360º video enriched with extra information (metadata), with the potential to improve the interaction paradigm and support more powerful and immersive video viewing experiences. There are, however, challenges in designing effective environments that take advantage of this immersive potential. Panoramic screens and CAVEs are examples of environments that move toward total immersion and provide privileged conditions for playing immersive video, but they are not very convenient and, especially in the case of CAVEs, not easily accessible.
    The flexibility of mobile devices, on the other hand, lets users employ them as, for example, a (mobile) window onto the video in which they are immersed, and to take these viewing experiences with them anywhere. As second screens, mobile devices can aid navigation of the content presented on the main screen (whether a panoramic display or a CAVE) and deliver additional information to the user, removing from the main screen any information extraneous to the core content, which provides a better sense of immersion and flexibility.
    This work explores the immersive potential of 360º video in mobile environments, augmented with several types of information. Extending earlier work (Neng, 2010; Noronha, 2012; Álvares, 2012), which focused mainly on the participatory dimension of immersion, the present approach centres on the perceptual dimension. Several features were designed, developed, and tested, grouped into a 360º video viewing application: Windy Sight Surfers. Given the growing popularity of mobile devices and the characteristics that make them an opportunity to improve human-computer interaction and, more specifically, to support more immersive video viewing experiences, Windy Sight Surfers is strongly tied to mobile environments; given the interaction possibilities that second screens introduce, it also includes a component for interaction with larger screens.
    The videos used in Windy Sight Surfers are 360º videos augmented with information recorded by the application during capture: while the camera films, the application records additional information (metadata) from several device sensors that complements and enriches the videos. Geographic coordinates and travel speed are captured from the GPS, the user's orientation from the digital compass, and the G-forces acting on the device from the accelerometer, while weather conditions are collected from a web service. Once captured, videos and their metadata can be uploaded to the system. Uploaded videos can be searched with the traditional set of keywords, with filters related to the nature of the application (e.g. speed, time of day, weather conditions), or through a map, which adds a geographic component to the search process. Results can be presented in a conventional list, as a cover-flow, or on the map.
    For viewing, videos are mapped around a cylinder, which represents the 360º view and conveys the feeling of being partially surrounded by the video. Since viewing takes place on mobile devices, users can continuously shift the 360º video's viewing angle to the left or right by moving the device around themselves, as if the device were a window onto the 360º video.
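    As a minimal sketch of this "device as a window" behaviour, the device's compass heading can be mapped to a horizontal offset into the frame wrapped around the cylinder; the frame width and field of view below are illustrative values, not those of Windy Sight Surfers:

```python
FRAME_WIDTH = 3840   # pixels in one full 360º video frame (illustrative)
VIEW_FOV = 90.0      # horizontal field of view shown on the device, in degrees

def viewport_x(compass_heading_deg, drag_offset_deg=0.0):
    """Map the device's compass heading to the left edge of the visible window."""
    # Center the window on the current heading, then wrap around the cylinder.
    center = (compass_heading_deg + drag_offset_deg) % 360.0
    left_deg = (center - VIEW_FOV / 2) % 360.0
    return int(left_deg / 360.0 * FRAME_WIDTH)

# Dragging a finger adjusts drag_offset_deg, so touch and compass input compose.
print(viewport_x(0.0))    # facing the frontal angle of the video
print(viewport_x(180.0))  # facing the rear angle
```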
    Users can also change the viewing angle by dragging a finger across the video, since the whole screen acts as a sliding interface during 360º playback. Several features were also incorporated to make viewing more realistic. A wind accessory was developed on the Arduino platform that uses each video's metadata to produce wind, giving a more realistic sense of the wind and of the travel speed during playback; the implemented algorithm takes into account not only the travel speed but also the wind conditions (strength and direction) at capture time and the user's orientation according to the video angle being viewed.
    As for the audio component, each video's audio is mapped into a three-dimensional sound space that can be played on stereo headphones. The position of the sound sources is tied to the video's frontal angle and changes with the angle being viewed: when the user views the front of the video, the sound sources sit in front of the user's head; when the user views the rear angle, they sit behind it. Since the videos cover 360º, the sound sources move around a circle around the user's head, the aim being to give additional orientation within the video being watched. To heighten the sense of movement through audio, the Doppler effect was explored. This effect is the change in the observed frequency of a wave that occurs when the source and the observer are in motion relative to each other. Because the effect is associated with the notion of movement, an experiment was conducted to analyse whether its controlled use can increase the sense of movement during playback: a second audio layer reproduces the Doppler effect cyclically and in a controlled way, at a rate tied to the video's travel speed, so the higher the speed, the more frequently the effect is played.
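    For reference, the classical Doppler relation behind this effect, stated here in its general form (the thesis applies the effect qualitatively, only modulating how often it is played back), is:

```latex
f_{\text{obs}} = f_{\text{src}} \,\frac{c + v_{\text{obs}}}{c - v_{\text{src}}}
```

    where c is the speed of sound in the medium and v_obs and v_src are the observer's and source's speeds along the line between them, taken positive when they approach each other.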
    These features seek to improve the system's immersive capabilities by stimulating the user's senses. In addition, Windy Sight Surfers includes a set of features that improve immersion by making the user aware of the video's context, helping the user follow what is happening in the video: a layer over the video displays information such as the current speed, the orientation of the angle being viewed, and the instantaneous G-force. These features fall into one category of information displayed permanently during playback and a complementary second category of information displayed momentarily, tied to particular portions of the video.
    Seeking a more engaging user experience, an emotion recognizer based on facial expression recognition was incorporated into Windy Sight Surfers. Users' facial expressions are analysed during playback, and the results are currently used in three ways: in video cataloguing and search features, in features that influence the application's flow control, and in the evaluation of the system itself.
    In the context of the ImTV research project (url-ImTV), and to make the application as flexible as possible, Windy Sight Surfers has a second-screen component that allows interaction with wider screens such as televisions, so that the two devices can be used together to take the best advantage of each and increase the system's immersive capabilities. In this setting, videos are played on the connected screen, while the mobile application takes over controlling the content presented there and providing a set of additional information, such as a minimap presenting a planar projection of the video's 360º, and a map of the geographic area associated with the video, showing the route being viewed in real time together with routes of other videos from the same geographic area.
    A usability evaluation was carried out with users, based on the USE questionnaire and on the Self-Assessment Manikin (SAM) extended with two additional parameters for presence and realism. Building on observation of users carrying out tasks, interviews gathered comments, suggestions, and concerns about the tested features. The emotional evaluation tool developed was also used to record the emotions most prevalent while using the application. Finally, the overall immersive potential of Windy Sight Surfers was evaluated with the Immersive Tendencies Questionnaire (ITQ) and an adapted version of the Presence Questionnaire (PQ). The results confirmed the advantages of using multisensory approaches to improve the immersive characteristics of a video environment. Moreover, certain properties and parameters were identified that achieve better, more satisfying results under particular conditions, so these results can serve as guidelines for future environments related to immersive video.
    By appealing to several senses and conveying very rich information, video has the potential for a strong emotional impact on viewers, greatly influencing their sense of presence and engagement. This potential may be extended even further with multimedia sensing and the flexibility of mobility.
    Mobile devices are commonly used and increasingly incorporate a wide range of sensors and actuators with the potential to capture and display 360º video and metadata, thus supporting more powerful and immersive video user experiences. This work was carried out in the context of the ImTV research project (url-ImTV) and explores the immersion potential of 360º video. The matter is approached in a mobile environment context and in a context of interaction with wider screens, using second screens to interact with the video; in both situations, the videos are augmented with several types of information. Several functionalities were therefore designed regarding the capture, search, visualization and navigation of 360º video. Results confirmed advantages in using a multisensory approach as a means to increase immersion in a video environment. Furthermore, specific properties and parameters that worked better in different conditions have been identified, enabling these results to serve as guidelines for future environments related to immersive video.

    Industry attitudes and behaviour towards web accessibility in general and age-related change in particular and the validation of a virtual third-age simulator for web accessibility training for students and professionals

    While the need for web accessibility for people with disabilities is widely accepted, the same visibility does not apply to the accessibility needs of older adults. This research initially explored developer behaviour and practices: how companies presented accessibility statements on their websites, whether they mentioned accessibility as a selling point to potential clients, and the accessibility of their company homepages. From this starting point, the research focused on web accessibility for ageing in particular. A questionnaire was developed to explore the differences between developer views of general accessibility and accessibility for older people. The questionnaire findings indicated that ageing is not seen as an accessibility issue by a majority of developers. Awareness of ageing accessibility documentation was also very low, highlighting the need to raise awareness of accessibility practices for ageing. Current age-related documentation developed by the Web Accessibility Initiative was then examined and critiqued. The findings show a tension between the machine-centric Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the needs of older people. Comparing the guidelines against research-derived findings reveals that the Assistive Technology (AT) centric structure of the documentation does not appropriately highlight accessibility practices in a context that matches the observed behaviour of older people. The documentation also fails to appropriately address the psycho-social ramifications of how older people choose to interact with technology, and of how they identify themselves in relation to any conditions they have which may be considered disabling. The need for a novel, engaging, and awareness-raising tool resulted in the development of what is essentially a "virtual third-age simulator". This ageing simulator is the first to combine multiple impairments in an active simulation, and it uses eye-tracking technology to increase the fidelity of conditions resulting in partial sightedness. It also allows developers to view their own web content in addition to the lessons provided with the simulations presented in the software. The simulator was then validated in terms of its ability to raise awareness and to affect web industry professionals' intentions towards accessible practices that benefit older people.

    A goal-oriented user interface for personalized semantic search

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2006. Includes bibliographical references (v. 2, leaves 280-288).
    Users have high-level goals when they browse the Web or perform searches. However, the two primary user interfaces positioned between users and the Web, Web browsers and search engines, have very little interest in users' goals. Present-day Web browsers provide only a thin interface between users and the Web, and present-day search engines rely solely on keyword matching. This thesis leverages large knowledge bases of semantic information to provide users with a goal-oriented Web browsing experience. By understanding the meaning of Web pages and search queries, this thesis demonstrates how Web browsers and search engines can proactively suggest content and services to users that are both contextually relevant and personalized. This thesis presents (1) Creo, a Programming by Example system that allows users to teach their computers how to automate interactions with their favorite Web sites by providing a single demonstration, (2) Miro, a Data Detector that matches the content of a Web page to high-level user goals, and allows users to perform semantic searches, and (3) Adeo, an application that streamlines browsing the Web on mobile devices, allowing users to complete actions with a minimal amount of input and output. An evaluation with 34 subjects found that they were more effective at completing tasks when using these applications, and that the subjects would use these applications if they had access to them. Beyond these three user interfaces, this thesis also explores a number of underlying issues, including (1) automatically providing semantics to unstructured text, (2) building robust applications on top of messy knowledge bases, (3) leveraging surrounding context to disambiguate concepts that have multiple meanings, and (4) learning new knowledge by reading the Web.
    by Alexander James Faaborg. S.M.
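    As a rough illustration of the Programming by Example idea behind Creo, a recorded demonstration can be stored as a list of (selector, action, value) steps and replayed with new parameter values; the step format and replay loop here are hypothetical, not Creo's actual design:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    selector: str                 # where on the page the user acted
    action: str                   # "click" or "type"
    value: Optional[str] = None   # slot to fill at replay time

# One demonstration of "look up a product", generalized by leaving the value open.
RECIPE = [
    Step("input#search", "type", value="{query}"),
    Step("button#submit", "click"),
]

def replay(recipe, driver, **slots):
    """Replay a recorded demonstration, filling parameter slots with new values."""
    for step in recipe:
        element = driver.find_element("css selector", step.selector)
        if step.action == "type":
            element.send_keys(step.value.format(**slots))
        elif step.action == "click":
            element.click()

# Usage (assuming a Selenium WebDriver instance named driver):
#   replay(RECIPE, driver, query="laptop")
```

    The single demonstration generalizes because the variable part of the interaction is captured as a slot rather than a literal value, which is the essence of teaching by one example.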