
    Trends in Advertising: How the Rise in Artificial Intelligence May Influence the Field of Content Strategy

    Whereas prior research on artificial intelligence has dealt with automation in fields such as medicine, engineering, and computer science, this study asks, “To what extent can AI be creative in the context of content strategy?” To answer this question, the study employs content analysis of 16 online news and blog articles, drawn primarily from marketing organizations, to identify and explain key variables in the relationship between the computer and the creative professional. The study finds that many online marketing publications share the core belief that AI will play the future role of creative assistant in content strategy. As AI becomes increasingly capable of taking on lower-level creative tasks such as content organization, ideation, basic copywriting, and photo editing, many believe this will free content strategy professionals to focus on more creatively demanding, big-picture work.

    SmartMirror: A Glance into the Future

    In today's society, information is available at a glance through our phones, laptops, desktops, and more, but an extra level of interaction is still required to access it. As technology matures, it should move further and further away from the traditional style of interaction with devices. In the past, information was relayed through paper, then through computers, and today through our phones and many other media. Technology should become more integrated into our lives: more seamless and more invisible. We hope to push the envelope further into the future and propose a new, simple way of connecting with your morning newspaper. We present our idea, the SmartMirror: information at a glance. Our system aims to deliver your information quickly and comfortably, with a modern aesthetic. While modern appliances require input through modules such as keyboards or touch screens, we follow a model that can function purely on voice and gesture. We seek to deliver your information during your morning routine and throughout the day, when taking out your phone is not always possible. This caters to a larger audience, as the average consumer hopes to accomplish tasks with minimal active interaction with their adopted technology. The idea has many future applications, such as integration with new virtual or augmented reality devices or the simplification of consumers' personal media sources.

    Generating collaborative systems for digital libraries: A model-driven approach

    This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/). Copyright © 2010 The Authors. The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for defining the notions and services involved in developing digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, together with the graphical user interfaces that give end users access to them. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, and the CRADLE environment has been evaluated using the cognitive dimensions framework.

    Personalized face and gesture analysis using hierarchical neural networks

    The video-based computational analysis of human face and gesture signals encompasses a myriad of challenging research problems involving computer vision, machine learning, and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream; b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations; c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals; and d) the personalization of machine learning models, which can adapt to subject- and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to the problem of segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from students' facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: the personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group- or subject-specific variations in face and gesture signals. We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition, and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
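    The "shared plus subject-specific parameters" idea behind the hierarchical models above can be sketched in a few lines. This is a hypothetical illustration only, reduced to a linear scorer rather than the thesis's Bayesian neural networks; all names and data are invented for the example.

```python
import numpy as np

# Hypothetical sketch: a population-level weight vector shared by everyone,
# plus a small per-subject offset that captures individual nuances.
rng = np.random.default_rng(0)

def personalized_predict(x, w_shared, w_subject):
    """Score one feature vector with shared plus per-subject weights."""
    return float(x @ (w_shared + w_subject))

n_features, n_subjects = 4, 3
w_shared = rng.normal(size=n_features)              # population-level weights
w_subject = {s: 0.1 * rng.normal(size=n_features)   # small per-subject offsets
             for s in range(n_subjects)}

x = rng.normal(size=n_features)                     # e.g. one gesture descriptor
scores = {s: personalized_predict(x, w_shared, w_subject[s])
          for s in range(n_subjects)}
```

    The same input thus yields a slightly different score per subject, while the shared weights keep the subjects' models tied to the population.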

    Personalization in object-based audio for accessibility: a review of advancements for hearing impaired listeners

    Hearing loss is widespread and significantly impacts an individual's ability to engage with broadcast media. Access can be improved through new object-based audio personalization methods. Drawing on the literature on hearing loss and intelligibility, this paper develops three dimensions that are evidenced to improve intelligibility: spatial separation, speech-to-noise ratio, and redundancy. These can be personalized, individually or concurrently, using object-based audio. A systematic review of all work on object-based audio personalization is then undertaken. The three dimensions are used to evaluate each project's approach to personalization, identifying successful approaches, commercial challenges, and the next steps required to ensure continuing improvements to broadcast audio for hard-of-hearing individuals.
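    The speech-to-noise-ratio dimension above relies on a key property of object-based audio: each object stays a separate stream until render time, so a listener-specific gain can be applied per object. The sketch below is an illustrative assumption, not any project's actual renderer; the object names and the ±6 dB values are invented for the example.

```python
import numpy as np

# Two hypothetical audio objects (sine tones stand in for real content).
sr = 48_000
t = np.arange(sr) / sr
objects = {
    "dialogue": 0.5 * np.sin(2 * np.pi * 220.0 * t),  # stand-in speech object
    "ambience": 0.3 * np.sin(2 * np.pi * 55.0 * t),   # stand-in background object
}

def render(objects, gains_db):
    """Mix objects to one channel, applying per-object gains in dB."""
    mix = np.zeros_like(t)
    for name, signal in objects.items():
        mix += 10 ** (gains_db.get(name, 0.0) / 20) * signal
    return mix

default_mix = render(objects, {})                     # broadcaster's default balance
accessible_mix = render(objects, {"dialogue": 6.0,    # boost speech...
                                  "ambience": -6.0})  # ...and duck background
```

    A channel-based broadcast cannot do this, because speech and background are already summed before transmission.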

    Immersive interconnected virtual and augmented reality: a 5G and IoT perspective

    Despite remarkable advances, current augmented and virtual reality (AR/VR) applications are a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, remains a distant dream. The great barrier that stands between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomforts. Bringing AR/VR to the next level and enabling immersive interconnected AR/VR will require significant advances in 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges of a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each, we delve into the current state of the art and the challenges that must be addressed before the dream of remote AR/VR interaction can become reality.
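    A quick back-of-the-envelope budget shows why the 20 ms bound cited above is so stringent once networking is in the loop. Every per-component figure below is an illustrative assumption, not a measured value from the article.

```python
# Hypothetical motion-to-photon budget for interconnected AR/VR.
BUDGET_MS = 20.0

components_ms = {
    "sensor capture":       2.0,
    "uplink (5G URLLC)":    1.0,
    "edge processing":      7.0,
    "downlink (5G URLLC)":  1.0,
    "rendering":            6.0,
    "display scan-out":     2.0,
}

total_ms = sum(components_ms.values())   # 19.0 ms in this sketch
headroom_ms = BUDGET_MS - total_ms       # 1.0 ms of slack left
within_budget = total_ms <= BUDGET_MS
```

    Even with 1 ms network legs, the assumed pipeline leaves only about 1 ms of slack, which is why millisecond-scale URLLC latencies are treated as a prerequisite rather than an optimization.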

    Verification and Application of Conceptual Model and Security Requirements on Practical DRM Systems in E-Learning

    The paper presents a verification of a previously developed conceptual model of security-related processes in DRM implementations. The applicability of the established security requirements in practice is also checked by comparing them against four real DRM implementations (Microsoft Media DRM, Apple's iTunes, SunnComm Technologies' MediaMax DRM, and First4Internet's XCP DRM). The exploited weaknesses of these systems resulting from the violation of specific security requirements are explained, and the possibility of avoiding such attacks by implementing the requirements at the design stage is discussed.
