    Optical access networks: business guidelines and policy recommendations

    Within the European FP7 project OASE, we have studied different business models for optical access networks. Based on an exploration of existing FTTH cases in Sweden, the Netherlands and Germany, we developed a cost-benefit model for the physical infrastructure provider (PIP) as well as the network provider (NP). Our evaluations have shown that the business case for the PIP is very difficult, even impossible in sparsely populated areas. Demand aggregation is an effective measure to guarantee an earlier return on investment for the PIP. In-house deployment and CPE are significant cost factors for the NP; business models that allow these costs to be allocated to house or home owners therefore deserve attention. Furthermore, open access at the fiber, wavelength and bit-stream levels allows for additional competition, but also leads to additional opportunities and costs. Finally, some cross-sectoral effects can be expected from a fiber deployment. This could be an additional stimulus for national, regional or municipal governments to invest, and in this way public support may be desirable.
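    As a rough illustration of the kind of cost-benefit reasoning described above, the sketch below computes a PIP payback period under different levels of demand aggregation (i.e. the share of households subscribed at launch). All figures and the function itself are hypothetical placeholders, not numbers or methods from the OASE study.

```python
from typing import Optional

def pip_payback_year(households: int,
                     capex_per_home_passed: float,
                     monthly_fee: float,
                     initial_take_rate: float,
                     yearly_take_rate_growth: float,
                     max_take_rate: float = 0.7,
                     horizon_years: int = 30) -> Optional[int]:
    """Return the first year in which cumulative fee revenue covers the CAPEX (illustrative only)."""
    capex = households * capex_per_home_passed
    cumulative_revenue = 0.0
    take_rate = initial_take_rate
    for year in range(1, horizon_years + 1):
        cumulative_revenue += households * take_rate * monthly_fee * 12
        if cumulative_revenue >= capex:
            return year
        take_rate = min(max_take_rate, take_rate + yearly_take_rate_growth)
    return None  # no payback within the planning horizon

# Without demand aggregation: few subscribers when the network launches.
print(pip_payback_year(1000, 1500.0, 20.0, initial_take_rate=0.10, yearly_take_rate_growth=0.05))
# With demand aggregation: a large share of households committed before deployment starts,
# which pulls the payback year forward.
print(pip_payback_year(1000, 1500.0, 20.0, initial_take_rate=0.40, yearly_take_rate_growth=0.05))
```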

    ICMI 2012 chairs' welcome

    Welcome to Santa Monica and to the 14th edition of the International Conference on Multimodal Interaction, ICMI 2012. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. We had a record number of submissions this year: 147 (74 long papers, 49 short papers, 5 special session papers and 19 demo papers). From these submissions, we accepted 15 papers for long oral presentation (20.3% acceptance rate), 10 papers for short oral presentation (20.4% acceptance rate) and 19 papers presented as posters, for a total acceptance rate of 35.8% across all short and long papers. 12 of the 19 demo papers were accepted. All 5 special session papers were directly invited by the organizers and were accepted. In addition, the program includes three invited keynote talks. One of the two novelties introduced at ICMI this year is the Multimodal Grand Challenges. Developing systems that can robustly understand human-human communication or respond to human input requires identifying the best algorithms and their failure modes. In fields such as computer vision, speech recognition, and computational linguistics, the availability of datasets and common tasks has led to great progress. This year, we accepted four challenge workshops: the Audio-Visual Emotion Challenge (AVEC), the Haptic Voice Recognition challenge, the D-META challenge and the Brain-Computer Interface challenge. Stefanie Tellex and Daniel Gatica-Perez are co-chairing the grand challenge this year. All four Grand Challenges will be presented on Monday, October 22nd, and a summary session will take place on the afternoon of Wednesday, October 24th, during the main conference. The second novelty at ICMI this year is the Doctoral Consortium, a separate one-day event taking place on Monday, October 22nd, co-chaired by Bilge Mutlu and Carlos Busso. The goal of the Doctoral Consortium is to provide Ph.D. students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial backgrounds and institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing multimodal interfaces. All accepted students receive a travel grant to attend the conference. Of the 25 applications, 14 students were accepted for participation and travel funding. The organizers thank the National Science Foundation (award IIS-1249319) and the conference sponsors for financial support.
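    For reference, the acceptance figures quoted above can be reproduced from the submission counts with a few lines of arithmetic:

```python
# Quick check of the quoted acceptance rates (all numbers taken from the welcome text above).
long_submitted, long_accepted = 74, 15
short_submitted, short_accepted = 49, 10
posters_accepted = 19

print(f"long oral:  {long_accepted / long_submitted:.1%}")   # ~20.3%
print(f"short oral: {short_accepted / short_submitted:.1%}") # ~20.4%
overall = (long_accepted + short_accepted + posters_accepted) / (long_submitted + short_submitted)
print(f"overall:    {overall:.1%}")                          # ~35.8% of long and short submissions
```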

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented-reality meeting support and for the real-time or off-line generation of meetings in virtual reality. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that can collect multimodal captures of the activities and discussions in a meeting room, so that this information can serve as input to tools for real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow the generation, for example in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those who cannot be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
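    As an illustrative sketch only (not the actual M4/AMI data model), the following shows one way such semantic representations of meeting activities could be structured so that meetings can be browsed, summarized, or replayed in virtual reality; all class and field names are assumptions made for this example.

```python
from dataclasses import dataclass, field
from enum import Enum

class ActivityType(Enum):
    DISCUSSION = "discussion"
    PRESENTATION = "presentation"
    VOTING = "voting"

@dataclass
class MeetingEvent:
    activity: ActivityType
    start_s: float                 # offset from meeting start, in seconds
    end_s: float
    participants: list[str]
    modalities: dict[str, str] = field(default_factory=dict)  # e.g. {"audio": "ch1.wav"}

def summarize(events: list[MeetingEvent]) -> dict[str, float]:
    """Total time spent per activity type: a trivial basis for meeting browsing."""
    totals: dict[str, float] = {}
    for ev in events:
        totals[ev.activity.value] = totals.get(ev.activity.value, 0.0) + (ev.end_s - ev.start_s)
    return totals

events = [
    MeetingEvent(ActivityType.PRESENTATION, 0.0, 600.0, ["alice"]),
    MeetingEvent(ActivityType.DISCUSSION, 600.0, 1500.0, ["alice", "bob", "carol"]),
    MeetingEvent(ActivityType.VOTING, 1500.0, 1560.0, ["alice", "bob", "carol"]),
]
print(summarize(events))  # {'presentation': 600.0, 'discussion': 900.0, 'voting': 60.0}
```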

    CLIVAR Exchanges No. 2


    Exploring personality-targeted UI design in online social participation systems

    We present a theoretical foundation and empirical findings demonstrating the effectiveness of personality-targeted design. Much like a medical treatment applied to a person based on their specific genetic profile, we argue that theory-driven, personality-targeted UI design can be more effective than a single design applied to the entire population. The empirical exploration focused on two settings, two populations and two personality traits: Study 1 shows that users' extroversion level moderates the relationship between the UI cue of audience size and users' contributions. Study 2 demonstrates that the effectiveness of social anchors in encouraging online contributions depends on users' level of emotional stability. Taken together, the findings demonstrate the potential and robustness of the interactionist approach to UI design. The findings contribute to the HCI community, and in particular to designers of social systems, by providing guidelines for targeted design that can increase online participation. Copyright © 2013 ACM.
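    A minimal sketch of what personality-targeted selection of a UI cue might look like, assuming trait scores normalized to [0, 1]; the thresholds, cue names and the direction of each effect are illustrative assumptions, not the paper's protocol.

```python
def choose_contribution_cue(extroversion: float, emotional_stability: float) -> str:
    """Pick the social cue shown to a user based on two trait scores in [0, 1].

    Study 1 suggests the audience-size cue's effect depends on extroversion;
    Study 2 suggests social anchors depend on emotional stability. The 0.6
    thresholds and the assignment directions below are placeholder assumptions.
    """
    if extroversion >= 0.6:
        return "show_audience_size_cue"       # audience-size cue targeted at extroverts
    if emotional_stability >= 0.6:
        return "show_social_anchor_examples"  # social anchors for emotionally stable users
    return "show_neutral_prompt"              # untargeted fallback design

print(choose_contribution_cue(extroversion=0.8, emotional_stability=0.4))
print(choose_contribution_cue(extroversion=0.3, emotional_stability=0.7))
```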

    How much control is enough? Optimizing fun with unreliable input

    Brain-computer interfaces (BCI) provide a valuable new input modality within human-computer interaction systems, but like other body-based inputs, the system's recognition of input commands is far from perfect. This raises important questions, such as: What level of control should such an interface be able to provide? What is the relationship between actual and perceived control? And in the case of applications for entertainment, in which fun is an important part of the user experience, should we even aim for perfect control, or is the optimum elsewhere? In this experiment the user plays a simple game in which a hamster has to be guided to the exit of a maze, while the amount of control the user has over the hamster is varied. Varying control through confusion matrices makes it possible to simulate the experience of using a BCI while using the traditional keyboard for input. After each session the user filled out a short questionnaire on fun and perceived control. Analysis of the data showed that the perceived control of the user could largely be explained by the amount of control in the respective session. As expected, user frustration decreases with increasing control. Moreover, the results indicate that the relation between fun and control is not linear. Although fun initially increases with improved control, the level of fun drops again just before perfect control is reached. This offers new insights for developers of games wanting to incorporate some form of BCI in their game: to create a fun game, unreliable input can be used to create a challenge for the user.
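    A minimal sketch of the confusion-matrix idea, assuming a four-command maze game (up/down/left/right): the key the user presses is passed through a confusion matrix so the game receives the intended command only with a given probability, simulating unreliable BCI recognition on top of keyboard input. The command set and accuracy levels are illustrative assumptions, not the experiment's actual parameters.

```python
import random

COMMANDS = ["up", "down", "left", "right"]

def confusion_matrix(accuracy: float) -> dict[str, list[float]]:
    """One row per intended command: probability of each recognized command."""
    off_diagonal = (1.0 - accuracy) / (len(COMMANDS) - 1)
    return {c: [accuracy if r == c else off_diagonal for r in COMMANDS] for c in COMMANDS}

def recognize(intended: str, matrix: dict[str, list[float]]) -> str:
    """Sample the command the game actually receives, given the intended one."""
    return random.choices(COMMANDS, weights=matrix[intended], k=1)[0]

# Vary the amount of control between sessions, as in the experiment's design.
for accuracy in (0.5, 0.7, 0.9, 1.0):
    m = confusion_matrix(accuracy)
    hits = sum(recognize("up", m) == "up" for _ in range(1000))
    print(f"accuracy {accuracy:.0%}: hamster obeyed {hits / 10:.1f}% of 'up' presses")
```

    A developer could then relate such a control level to self-reported fun, as the questionnaire data above does, to locate the non-linear optimum that lies just short of perfect control.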