
    Improving The Service Design Process: Process Integration, Conflict Reduction And Customer Involvement

    Service design is the science of creating service experiences from the customer’s perspective, making them useful, enjoyable, and cost-effective for the customer. Although the field of service design is relatively new, it has been expanding rapidly in research and practice. Most researchers focus on the usefulness of the service, cost efficiency, meeting customers’ needs, or service strategy. However, all service elements can benefit from improving the service design process itself. Current service design processes suffer from a lack of integration across activities, conflicts in decision making, and the exclusion of practitioners’ methods. In prior research, information models were created to integrate the service design process across the enterprise. As an extension, this dissertation introduces Petri Nets to improve the service design process. Petri Nets provide a uniform environment for the modeling, analysis, and design of discrete event systems; here they are used to develop a new service design process that enhances the multidisciplinary approach and includes practitioner methods. Additionally, this dissertation uses the Lens Model to improve the decision-making mechanism; the Lens Model is used to characterize decision-making policy in service design. Research shows that there is a conflict between the designer and the manager in service design decision-making. Single Lens Model systems are designed to capture the decision policies of the service designer and the service manager, and a double Lens Model system is used to compare their perspectives. Finally, this research suggests a new role for the customer in the design by applying an asset-based approach. Asset-based System Engineering (ABSE) is a recently introduced concept that attempts to synthesize systems around their key assets and strengths. ABSE is developed here as an innovative approach that views customers as a primary asset. Customer integration in the design process is achieved through several new service design tools.
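
    To make the modeling idea concrete, the following is a minimal Petri net sketch in Python; the class and the toy service-design flow are illustrative assumptions, not the dissertation's actual models. Places hold tokens, and a transition fires only when every one of its input places holds enough tokens.

        # Minimal Petri net sketch (hypothetical; not the dissertation's model).
        class PetriNet:
            def __init__(self, marking):
                self.marking = dict(marking)        # place -> token count
                self.transitions = {}               # name -> (inputs, outputs)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

            def fire(self, name):
                if not self.enabled(name):
                    raise ValueError(f"transition {name!r} is not enabled")
                inputs, outputs = self.transitions[name]
                for p, n in inputs.items():
                    self.marking[p] -= n
                for p, n in outputs.items():
                    self.marking[p] = self.marking.get(p, 0) + n

        # Toy flow: a design task moves from "requirements" to "review".
        net = PetriNet({"requirements": 1, "review": 0})
        net.add_transition("design", {"requirements": 1}, {"review": 1})
        net.fire("design")
        print(net.marking)                          # {'requirements': 0, 'review': 1}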

    Balancing automation and user control in a home video editing system

    The context of this PhD project is multimedia content management, in particular interaction with home videos. Nowadays, more and more home videos are produced, shared, and edited. Home videos are captured by amateur users, mainly to document their lives. People frequently edit home videos to select and keep the best parts of their visual memories and to add a touch of personal creativity. However, most users find current video editing products time-consuming and sometimes too technical and difficult. One reason editing takes so long is the slow access caused by the temporal dimension of video: a video must be played back in order to be watched or edited. Another limitation of current video editing tools is that they are modelled too closely on professional editing systems, including technical details such as frame-by-frame browsing. This thesis aims at making home video editing more efficient and easier for the non-technical, amateur user. To accomplish this goal, we followed two main guidelines: we designed a semi-automatic tool, and we adopted a user-centered approach. To gain insight into user behaviours and needs related to home video editing, we designed an Internet-based survey, which was answered by 180 home video users. The survey revealed that video editing is done frequently and is seen as a very time-consuming activity. We also found that users with little PC experience often consider video editing programs too complex. Although nearly all commercial editing tools are designed for a PC, many respondents said they would be interested in doing video editing on a TV. We created a novel concept, Edit While Watching, designed to be user-friendly: it requires only a TV set and a remote control instead of a PC. The video that the user inputs to the system is automatically analyzed and structured into small video segments. Editing operations work on these video segments, so the user no longer deals with individual video frames. After the input video has been analyzed and structured, a first edited version is prepared automatically. Subsequently, Edit While Watching allows the user to modify and enrich the automatically edited video while watching it. When the user is satisfied, the video can be saved to a DVD or another storage medium. We performed two iterations of system implementation and user testing to refine our concept. After the first iteration, we discovered that two requirements were insufficiently addressed: having an overview of the video and precisely controlling which video content to keep or discard. The second version of Edit While Watching was designed to address these points. It allows the user to visualize the video at three levels of detail: the chapters (or scenes) of the video, the shots inside one chapter, and the timeline representation of a single shot. The second version also allows users to edit the video at different levels of automation. For example, the user can choose an event in the video (e.g. a child playing with a toy) and simply ask the system to automatically include more content related to it. Alternatively, if the user wants more control, he or she can select precisely which content to add to the video. We evaluated the second version of our tool by inviting nine users to edit their own home videos with it.
The users judged Edit While Watching to be an easy-to-use and fast application, although some of them missed the ability to enrich the video with transitions, music, text, and pictures. Our test showed that the requirements of video overview and control over the selection of edited material were addressed better than in the first version. Moreover, the participants were able to select which video portions to keep or discard in a time close to the playback time of the video. The second version of Edit While Watching exploits different levels of automation. In some editing functions the user only gives an indication about editing a clip, and the system automatically decides the start and end points of the part of the video to be cut; in other editing functions the user has complete control over the start and end points of a cut. We wanted to investigate how to balance automation and user control to optimize the perceived ease of use, the perceived control, the objective editing efficiency, and the mental effort. To this end, we implemented three types of editing functions, each representing a different balance between automation and user control. To compare these three levels, we invited 25 users to perform pre-defined tasks with the three function types. The results showed that the function type with the highest level of automation performed worse than the other two types, according to both subjective and objective measurements. The other two types of functions were liked equally, although some users clearly preferred the functions that allowed faster editing while others preferred the functions that gave full control and a more complete overview. In conclusion, on the basis of this research some design guidelines can be offered for building an easy and efficient video editing application. Such an application should automatically structure the video, eliminate detail about single frames, support a scalable video overview, implement a rich set of editing functionalities, and preferably be TV-based.
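
    As an illustration of editing at the segment level rather than the frame level, the sketch below models the three-level structure (chapters, shots, frame ranges) described above; the data structure and names are assumptions for illustration, not the actual Edit While Watching implementation.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Shot:
            start_frame: int
            end_frame: int
            keep: bool = True                       # editing decision per shot

        @dataclass
        class Chapter:
            title: str
            shots: List[Shot] = field(default_factory=list)

        def edited_ranges(chapters):
            """Frame ranges the user chose to keep, at shot granularity."""
            return [(s.start_frame, s.end_frame)
                    for c in chapters for s in c.shots if s.keep]

        video = [Chapter("Birthday", [Shot(0, 240), Shot(241, 600, keep=False)]),
                 Chapter("Park", [Shot(601, 900)])]
        print(edited_ranges(video))                 # [(0, 240), (601, 900)]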

    Applying psychological science to the CCTV review process: a review of cognitive and ergonomic literature

    As CCTV cameras are used more and more often to increase security in communities, police are spending a larger proportion of their resources, including time, on processing CCTV images when investigating crimes that have occurred (Levesley & Martin, 2005; Nichols, 2001). As with all tasks, there are ways of approaching this task that will facilitate performance and others that will degrade it, either by increasing errors or by unnecessarily prolonging the process. A clearer understanding of the psychological factors influencing the effectiveness of footage review will facilitate future training in best practice for reviewing CCTV footage. The goal of this report is to provide such understanding by reviewing research on footage review, research on related tasks that require similar skills, and experimental laboratory research on the cognitive skills underpinning the task. The report is organised around five challenges to the effectiveness of CCTV review: the effects of the degraded nature of CCTV footage, distractions and interruptions, the length of the task, inappropriate mindset, and variability in people’s abilities and experience. Recommendations for optimising CCTV footage review include (1) conducting a cognitive task analysis to increase understanding of the ways in which performance might be limited, (2) exploiting technology advances to maximise the perceptual quality of the footage, (3) training people to improve the flexibility of their mindset as they perceive and interpret the images seen, (4) monitoring performance either on an ongoing basis, by using psychophysiological measures of alertness, or periodically, by testing screeners’ ability to find evidence in footage developed for such testing, and (5) evaluating the relevance of possible selection tests to screen effective from ineffective screeners.

    Cinematic Transmedia: A Physiological Look at Engagement with Marvel's Cinematic Universe as Measured by Brainwaves and Electrodermal Activity

    This study examines engagement with the Marvel Cinematic Universe (MCU) at the physiological and neurological levels and with self-reported measures. One title from each of three categories of Marvel media (a movie, a television show, and a streaming show) was shown to participants, whose brainwaves and galvanic skin response were recorded to determine whether they were engaged with the transmedia aspects of the MCU. Results showed that participants were consistently engaged with the transmedia content across all three media types, with brainwave activity varying only slightly between titles. The Marvel movie “The Avengers” was the most engaging for participants whose brainwaves and galvanic skin response were recorded, while participants in a survey-only control group agreed that MCU movies were the most enjoyable and held their interest the longest. For control-group participants, there were also significant relationships between ratings of television shows and engagement with each of the three media types.
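
    The study's analysis pipeline is not published with the abstract; as a hedged illustration of how an engagement measure can be derived from brainwave recordings, the sketch below computes the widely used Pope engagement index, beta / (alpha + theta), from EEG band power. The band edges and sampling rate are conventional assumptions, not the study's reported parameters.

        import numpy as np

        def band_power(signal, fs, lo, hi):
            """Mean spectral power of `signal` in the [lo, hi) Hz band."""
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            power = np.abs(np.fft.rfft(signal)) ** 2
            return power[(freqs >= lo) & (freqs < hi)].mean()

        def engagement_index(eeg, fs=256):
            theta = band_power(eeg, fs, 4, 8)       # conventional band edges
            alpha = band_power(eeg, fs, 8, 13)
            beta = band_power(eeg, fs, 13, 30)
            return beta / (alpha + theta)

        eeg = np.random.randn(256 * 10)             # 10 s of synthetic EEG at 256 Hz
        print(engagement_index(eeg))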

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, such as hybrid broadcast broadband (HBB) delivery and users’ perception modeling (i.e., Quality of Experience or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area and to approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Active learning based on computer vision and human-robot interaction for the user profiling and behavior personalization of an autonomous social robot

    Social robots coexist with humans in situations where they have to exhibit proper communication skills. Since users may have different features and communicative styles, personalizing human-robot interactions is essential for the success of these interactions. This manuscript presents active learning based on computer vision and human-robot interaction for user recognition and profiling, used to personalize robot behavior. The system identifies people using Intel's face-detection-retail-0004 model for face detection and FaceNet for face recognition, and it obtains users' information through interaction. The system aims to improve human-robot interaction by (i) using online learning to allow the robot to identify users and (ii) retrieving users' information to fill out their profiles and adapt the robot's behavior. Since user information is necessary for adapting the robot to each interaction, we hypothesized that users would find creating their profile by interacting with the robot more entertaining and easier than taking a survey. We validated our hypothesis with three scenarios: participants completed their profiles using an online survey, by interacting with a dull robot, or by interacting with a cheerful robot. The results show that participants gave the cheerful robot a higher usability score (82.14/100 points) and were more entertained while creating their profiles with the cheerful robot than in the other scenarios. Statistically significant differences in usability were found between the scenarios using the robot and the scenario involving the online survey. Finally, we show two scenarios in which the robot interacts with a known user and an unknown user to demonstrate how it adapts to the situation.

    The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Spanish Ministry of Science, Innovation and Universities; and Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
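
    As a hedged sketch of the embedding-based recognition step (the embedding size, threshold, and helper below are illustrative assumptions, not the paper's reported pipeline): a FaceNet-style model maps a face crop to a fixed-length vector, and a user is recognized when the nearest stored embedding lies within a distance threshold.

        import numpy as np

        def recognize(embedding, known_users, threshold=0.8):
            """Return the closest known user, or None if nobody is close enough."""
            best_name, best_dist = None, float("inf")
            for name, ref in known_users.items():
                dist = np.linalg.norm(embedding - ref)
                if dist < best_dist:
                    best_name, best_dist = name, dist
            return best_name if best_dist < threshold else None

        # Toy profiles: random vectors stand in for FaceNet embeddings.
        known = {"alice": np.random.randn(512), "bob": np.random.randn(512)}
        query = known["alice"] + 0.01 * np.random.randn(512)  # slightly perturbed
        print(recognize(query, known))              # "alice"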

    How Infants Perceive Animated Films

    Today, many infants begin consistently viewing videos at 4 to 9 months of age. Due to their reduced mobility and linguistic immaturity, younger infants are good watchers, spending a lot of time sitting and watching the actions and reactions, including emotional ones, of both real and televised people as well as animated characters. Since babies can perceive the similarity between a 2-dimensional image and the real 3-dimensional entity it depicts, they respond to the video image of another person with smiles and increased activity, much as they would to the actual person. Furthermore, the emotional reactions of a televised person can influence their behaviour. Infant attention to films, as to natural scenes, begins as stimulus-driven and progresses to top-down control as the child matures cognitively and acquires general world knowledge. The producers of infant-directed animations, however, use low-level visual features to guide infants’ attention to semantic information, which might explain infants’ preference for such animations. In this chapter, we discuss the developmental foundations of (animated) film cognition, focusing mainly on the perception of emotional cues, based on recent empirical findings.

    How infants perceive animated films

    Get PDF
    Book synopsis: Ranging from blockbuster movies to experimental shorts, and from documentaries to scientific research, computer animation shapes a great part of media communication processes today. Be it the portrayal of emotional characters in moving films or the creation of controllable emotional stimuli in scientific contexts, computer animation’s characteristic artificiality makes it ideal for various areas connected to the emotional: with the ability to move beyond the constraints of the empirical "real world," animation allows for immense freedom. This book looks at international film productions that use animation techniques to display and/or elicit emotions, with special attention to the aesthetics, characters, and stories of these films, and to the challenges and benefits of using computer techniques for these purposes.

    Effective design, configuration, and use of digital CCTV

    It is estimated that there are five million CCTV cameras in use today. CCTV is used by a wide range of organisations and for an increasing number of purposes. Despite this, there has been little research to establish whether these systems are fit for purpose. This thesis takes a socio-technical approach to determine whether CCTV is effective and, if not, how it could be made more effective. Human-computer interaction (HCI) knowledge and methods were applied to improve the understanding of what is needed to make CCTV effective; this was achieved through an extensive field study and two experiments. In Study 1, contextual inquiry was used to identify the security goals, tasks, and technology at 14 security control rooms, along with the factors that affected operator performance and their causes. The findings revealed a number of factors that interfered with task performance, such as poor camera positioning, ineffective workstation setups, difficulty in locating scenes, and the use of low-quality CCTV recordings. The impact of different levels of video quality on identification and detection performance was assessed in two experiments using a task-focused methodology. In Study 2, 80 participants identified 64 face images taken from four spatially compressed video conditions (32, 52, 72, and 92 Kbps); at a bit-rate quality of 52 Kbps (MPEG-4), differences in the number of faces correctly identified reached statistical significance. In Study 3, 80 participants each detected 32 events from four frame-rate CCTV video conditions (1, 5, 8, and 12 fps); below 8 frames per second, correct detections and task confidence ratings decreased significantly. These field and empirical research findings are presented in a framework using a typical CCTV deployment scenario, which has been validated through an expert review. The contributions and limitations of this thesis are reviewed, and suggestions are provided for how the framework should be developed further.
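
    For illustration only (the thesis does not describe its stimulus-preparation tooling, so the helper below is a hypothetical sketch), frame-rate conditions like those in Study 3 can be approximated by keeping every k-th frame of the source footage.

        def subsample(frames, source_fps, target_fps):
            """Approximate a lower frame rate by keeping every k-th frame."""
            step = max(1, round(source_fps / target_fps))
            return frames[::step]

        frames = list(range(250))                   # 10 s of 25-fps footage
        for fps in (1, 5, 8, 12):                   # the four Study 3 conditions
            kept = subsample(frames, 25, fps)
            print(f"{fps} fps target -> {len(kept)} frames kept")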