198 research outputs found

    Providing a better user-interface to explore large product spaces

    Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (leaves 59-60). By Ankur Chandra. S.B. and M.Eng.

    Unified Implicit and Explicit Feedback for Multi-Application User Interest Modeling

    A user often interacts with multiple applications while working on a task. User models can be developed individually at each application, but there is no easy way to derive a more complete user model from the user's distributed activity. To address this issue, this research studies the importance of combining various implicit and explicit relevance feedback indicators in a multi-application environment. It allows the different applications a user works with for different purposes to contribute user activity and its context, mutually supporting the user with unified relevance feedback. Using data collected from the web browser, Microsoft Word, Microsoft PowerPoint, Adobe Acrobat Writer and VKB, combinations of implicit relevance feedback with semi-explicit relevance feedback were analyzed and compared with explicit user ratings. Our past research shows that multi-application interest models based on implicit feedback theoretically outperformed single-application interest models based on implicit feedback. Also, in practice, a multi-application interest model based on semi-explicit feedback increased user attention to high-value documents. In the current dissertation study, we have incorporated topic modeling to represent interest in user models for textual content and compared similarity measures for improved recall and precision based on the text content. We also learned the relative value of features from content consumption applications and content production applications. Our experimental results show that incorporating implicit feedback in page-level user interest estimation resulted in significant improvements over the baseline models. Furthermore, incorporating semi-explicit content (e.g. annotated text) with the authored text is effective in identifying segment-level relevant content.
We have evaluated the effectiveness of the recommendation support from both the semi-explicit model (authored/annotated text) and the unified model (implicit + semi-explicit) and have found that they successfully allow users to locate content easily, because relevant details are selectively highlighted, and documents and passages within documents are recommended based on the user's indicated interest. Recommendations based on semi-explicit feedback were viewed the same as those from unified feedback, and recommendations based on semi-explicit feedback outperformed those from unified feedback in terms of matching post-task document assessments.

    Toward effective conversational messaging

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (leaves 118-123). By Matthew Talin Marx. M.S.

    Inquiries in Intelligent Information Systems: New Trajectories and Paradigms

    Rapid digital transformation drives organizations to continually revitalize their business models so they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market leverages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions. Elevating the cognitive competencies offered by IIS would strengthen organizations' strategic positions. With the advent of Deep Learning (DL), IoT, and Edge Computing, IISs have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and many industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL on computationally limited devices such as IoT hardware. Applying deep learning to IoT devices can turn everyday devices into intelligent interactive assistants. IISs suffer from many challenges that affect their service quality, process quality, and information quality. These challenges, in turn, affect user acceptance in terms of satisfaction, use, and trust. Moreover, Information Systems (IS) research has paid very little attention to IIS development and to the foreseeable contribution of new paradigms to addressing IIS challenges. Therefore, this research aims to investigate how the employment of new AI paradigms would enhance the overall quality, and consequently user acceptance, of IIS. This research employs different AI paradigms to develop two different IIS. The first system uses deep learning, edge computing, and IoT to develop scene-aware ridesharing monitoring; it enhances the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real estate searching process by formulating the search problem as a multi-criteria decision.
The system also allows users to filter properties based on their degree of damage, where a deep learning network locates damage in each real estate image. The system enhances real-estate website service quality by enhancing flexibility, relevancy, and efficiency. The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to effectively address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfil. The research also highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
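Formulating property search as a multi-criteria decision, as the second system does, can be sketched as a weighted sum over normalised criteria. The criteria names, weights, and the damage-derived score below are assumptions for illustration, not the system's actual model.

```python
# Illustrative multi-criteria scoring of a real-estate listing: each
# criterion is a normalised value in [0, 1] (e.g. "damage_free" could
# come from a deep-learning damage detector), combined by user-chosen
# weights. Names and weights are hypothetical.

def score_property(criteria: dict, weights: dict) -> float:
    """Weighted sum of normalised criterion values."""
    return sum(weights[k] * criteria[k] for k in weights)

listing = {"price_fit": 0.8, "location_fit": 0.6, "damage_free": 0.9}
weights = {"price_fit": 0.5, "location_fit": 0.3, "damage_free": 0.2}
score = score_property(listing, weights)
```

Ranking listings by such a score, and filtering on individual criteria like the damage estimate, gives the flexible search behaviour the abstract describes.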

    Quality of experience aware adaptive hypermedia system

    The research reported in this thesis proposes, designs and tests a novel Quality of Experience Layer (QoE-layer) for the classic Adaptive Hypermedia Systems (AHS) architecture. Its goal is to improve the end-user perceived Quality of Service in different operational environments suitable for residential users. While the AHS' main role of delivering personalised content is not altered, its functionality and performance are improved, and thus user satisfaction with the service provided. The QoE Layer takes into account multiple factors that affect Quality of Experience (QoE), such as Web components and network connection. It uses a novel Perceived Performance Model that takes into consideration a variety of performance metrics in order to learn about the Web user's operational environment characteristics, about changes in network connection, and about the consequences of these changes for the user's quality of experience. This model also considers the user's subjective opinion about his/her QoE, increasing its effectiveness, and suggests strategies for tailoring Web content in order to improve QoE. The user-related information is modelled using a stereotype-based technique that makes use of probability and distribution theory. The QoE-Layer has been assessed through both simulations and qualitative evaluation in the educational area (mainly distance learning), when users interact with the system in a low bit rate operational environment. The simulations assessed the "learning" and "adaptability" behaviour of the proposed layer across different and variable home connections when a learning task is performed. The correctness of Perceived Performance Model (PPM) suggestions, access time of the learning process and quantity of transmitted data were analysed. The results show that the QoE layer significantly improves performance in terms of the access time of the learning process, with a reduction in the quantity of data sent by using image compression and/or elimination.
A visual quality assessment confirmed that this image quality reduction does not significantly affect the viewers' perceived quality, which remained close to the "good" perceptual level. For qualitative evaluation, the QoE layer was deployed on the open-source AHA! system. The goal of this evaluation was to compare the learning outcome, system usability and user satisfaction when AHA! and QoE-aware AHA systems were used. The assessment was performed in terms of learner achievement, learning performance and usability. The results indicate that the QoE-aware AHA system did not affect the learning outcome (students had similar learning achievements), but learning performance was improved in terms of study time. Most significantly, QoE-aware AHA provides an important improvement in system usability, as indicated by users' opinions about their satisfaction related to QoE.
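The content-tailoring decision the QoE layer makes can be sketched as a simple policy: estimate the page's delivery time on the current connection and degrade image content only as far as needed. The tolerance threshold, compression ratio, and strategy names below are illustrative assumptions, not the thesis's actual Perceived Performance Model.

```python
# Illustrative sketch of a QoE tailoring decision: given a page's
# payload and an estimated connection bitrate, pick the least
# destructive strategy that keeps access time under a tolerance.
# The 10 s tolerance and the assumed 40% compressed size are
# hypothetical parameters.

def tailor_content(page_bytes: int, bitrate_bps: float,
                   tolerance_s: float = 10.0) -> str:
    est_time = page_bytes * 8 / bitrate_bps
    if est_time <= tolerance_s:
        return "send-original"
    # Try image recompression to ~40% of original size first.
    if (page_bytes * 0.4) * 8 / bitrate_bps <= tolerance_s:
        return "compress-images"
    # Last resort: eliminate images entirely.
    return "eliminate-images"
```

The thesis's model additionally learns from observed performance and user feedback rather than relying on fixed thresholds like these.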

    Web-based Named Entity Recognition and Data Integration to Accelerate Molecular Biology Research

    Finding information about a biological entity is a step tightly bound to molecular biology research. Despite ongoing efforts, this task is both tedious and time consuming, and tends to become Sisyphean as the number of entities increases. Our aim is to assist researchers by providing them with summary information about biological entities while they are browsing the web, as well as with simplified programmatic access to biological data. To materialise this aim we employ emerging web technologies offering novel web-browsing experiences and new ways of software communication. Reflect is a tool that couples biological named entity recognition with informative summaries, and can be applied to any web page during web browsing. Invoked either via its browser extensions or via its web page, Reflect highlights gene, protein and chemical molecule names in a web page and dynamically attaches summary information to them. The latter provides an overview of what is known about the entity, such as a description, the domain composition, the 3D structure and links to more detailed resources. The annotation process occurs via easy-to-use interfaces. Its fast performance allows Reflect to serve as an interactive companion for scientific readers and researchers while they browse the web. OnTheFly is a web-based application that not only extends Reflect functionality to Microsoft Word, Microsoft Excel, PDF and plain text format files, but also supports the extraction of networks of known and predicted interactions for the entities recognised in a document. A combination of Reflect and OnTheFly offers a data annotation solution for documents used by life science researchers throughout their work. EasySRS is a set of remote methods that expose the functionality of the Sequence Retrieval System (SRS), a data integration platform used to provide access to life science information including genetic, protein, expression and pathway data.
EasySRS supports simultaneous queries to all of the integrated resources. Accessed from a single point, via the web, and based on a simple, common query format, EasySRS facilitates the task of biological data collection and annotation. EasySRS has been employed to enrich the entries of a Plant Defence Mechanism database. UniprotProfiler is a prototype application that employs EasySRS to generate graphs of knowledge based on database record cross-references. These graphs are converted into 3D diagrams of interconnected data. The 3D diagram generation occurs via Systems Biology visualisation tools that employ intuitive graphs to replace long result lists and facilitate hypothesis generation and knowledge discovery.
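The "simultaneous queries to all integrated resources" pattern can be sketched as a parallel fan-out: one common query is dispatched to every resource and the results are merged per resource. The resource names and the `fetch` callable are hypothetical; the real EasySRS exposes SRS functionality through its own remote methods.

```python
# Sketch of a fan-out query over several data resources in parallel.
# `fetch(resource, query)` is a hypothetical callable standing in for
# whatever remote method actually retrieves records from one resource.
from concurrent.futures import ThreadPoolExecutor

def query_all(query: str, resources: list[str], fetch) -> dict[str, list]:
    """Run the same query against every resource concurrently."""
    with ThreadPoolExecutor(max_workers=len(resources)) as pool:
        futures = {r: pool.submit(fetch, r, query) for r in resources}
        # Collect results keyed by resource name.
        return {r: f.result() for r, f in futures.items()}
```

Returning results keyed by resource keeps the single entry point while preserving each source's provenance for downstream annotation.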

    Exploring data sharing obligations in the technology sector

    This report addresses the question: What is the role of data in the technology sector, and what are the opportunities and risks of mandatory data sharing? The answer provides insights into the costs and benefits of variants of data sharing obligations with and between technology companies.

    Implicit image annotation by using gaze analysis

    PhD thesis. Thanks to advances in technology, people are storing a massive amount of visual information in online databases. Today it is normal for a person to take a photo of an event with their smartphone and effortlessly upload it to a host domain. For later quick access, this enormous amount of data needs to be indexed by providing metadata for its content. The challenge is to provide suitable captions for the semantics of the visual content. This thesis investigates the possibility of extracting and using the valuable information stored in human eye movements when interacting with digital visual content, in order to provide information for image annotation implicitly. A non-intrusive framework is developed which is capable of analysing gaze movements to classify the images visited by a user into two classes when the user is searching for a Target Concept (TC) in the images. The first class is formed of the images that contain the TC, called the TC+ class, and the second class is formed of the images that do not contain the TC, called the TC- class. By analysing the eye movements alone, the developed framework was able to identify over 65% of the images that the subject users were searching for, with an accuracy of over 75%. This thesis shows that the information present in gaze patterns can be employed to improve the machine's judgement of image content by assessing human attention to the objects inside virtual environments. European Commission funded Network of Excellence PetaMedi
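The TC+ / TC- split described above can be sketched as a decision rule over fixation features. The framework in the thesis uses richer gaze features than this; the single cumulative-dwell feature and the threshold below are assumptions for illustration only.

```python
# Hypothetical sketch of gaze-based image classification into TC+/TC-.
# Each fixation is (x, y, duration_seconds) on the viewed image; an
# image that attracts long cumulative dwell is labelled TC+ (likely
# contains the target concept). The 1.5 s threshold is invented.

def classify_image(fixations: list[tuple[float, float, float]],
                   dwell_threshold_s: float = 1.5) -> str:
    total_dwell = sum(duration for _, _, duration in fixations)
    return "TC+" if total_dwell >= dwell_threshold_s else "TC-"
```

In practice such a threshold would be calibrated per user and per task, since dwell patterns vary with image complexity and search difficulty.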

    Designing a Foot Input System for Productive Work at a Standing Desk

    In this thesis we present Tap-Kick-Click, a foot interaction system for controlling common desktop applications. This system enables computer workers to take healthy and productive breaks from using a keyboard and mouse, and demonstrates foot interaction techniques which could be applied in other contexts. Our work supplements the existing literature on foot-based interaction, as no published work has combined foot input with a standing desk or attempted control of conventional desktop applications. We describe two experiments to investigate questions about the human performance characteristics of foot input relevant to our application which were unanswered in the existing literature. These experiments investigated the effect of target size, direction and distance; the difference between dominant and non-dominant foot; the use of tapping and kicking interaction; and the impact of displaying or hiding a foot cursor. Based on our results we present a set of design guidelines, including a suggested minimum target size, a recommendation to ignore foot dominance, and a preference ranking for direction and foot action. These design guidelines informed the design of Tap-Kick-Click, which we describe in detail. It uses a sensing technique based on a Microsoft Kinect depth camera and a pair of augmented slippers, capable of robustly sensing foot position, kicking and tapping. The primary interaction technique is based on combinations of foot action and directional tapping in a low-density target layout, supported by feedback and instructions presented in an always visible sidebar. This technique is supplemented with a system for selecting elements in a GUI, a high-density target layout for selecting items from a menu, and a help screen. We illustrate the usefulness of Tap-Kick-Click by describing how it can be used to control a web browser, a citation manager and a debugger.
Finally, we present the results of a study conducted to evaluate whether new users could learn and use the system in a web browser context. The study demonstrated that users are able to learn and use the system successfully, and identified areas for improvement.
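The primary interaction technique, combining a foot action with a directional tap, amounts to a lookup from (action, direction) pairs to commands. The bindings below are invented for illustration; the thesis defines its own application-specific mappings.

```python
# Illustrative sketch of a Tap-Kick-Click style binding table: a foot
# action ("tap" or "kick") plus a compass direction selects a command.
# All command names here are hypothetical examples.

BINDINGS = {
    ("tap", "north"): "scroll-up",
    ("tap", "south"): "scroll-down",
    ("kick", "east"): "next-tab",
    ("kick", "west"): "previous-tab",
}

def dispatch(action: str, direction: str) -> str:
    """Resolve a sensed foot event to a command, or do nothing."""
    return BINDINGS.get((action, direction), "no-op")
```

A low-density layout like this keeps each target large, in line with the minimum-target-size guideline the experiments produced.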
