
    Interim research assessment 2003-2005 - Computer Science

    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. It also provides information for others interested in our research activities.

    Contextual Social Networking

    The thesis centers on the multi-faceted research question of how contexts can be detected and derived for use in new context-aware Social Networking services and for improving the usefulness of existing Social Networking services, giving rise to the notion of Contextual Social Networking. In a first, foundational part, we characterize the closely related fields of Contextual, Mobile, and Decentralized Social Networking using different methods and focusing on different detailed aspects. A second part addresses the question of how short-term and long-term social contexts, as especially interesting forms of context for Social Networking, can be derived. We focus on NLP-based methods for characterizing social relations, a typical form of long-term social context, and on Mobile Social Signal Processing methods for deriving short-term social contexts from interaction geometry and audio. We furthermore investigate how personal social agents may combine such social context elements at various levels of abstraction. The third part discusses new and improved context-aware Social Networking service concepts. We investigate special forms of awareness services, new forms of social information retrieval, social recommender systems, context-aware privacy concepts, and services and platforms supporting Open Innovation and creative processes. This version of the thesis does not contain the included publications, owing to the journals' copyrights. Contact regarding the version with all included publications: Georg Groh, [email protected]
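
    One idea summarized above, deriving short-term social contexts from interaction geometry, can be illustrated with a minimal sketch. The Person representation, the 2 m distance threshold, and the orientation tolerance below are illustrative assumptions, not the models actually developed in the thesis.

import math
from dataclasses import dataclass

@dataclass
class Person:
    x: float          # position in metres
    y: float
    heading: float    # viewing direction in radians

def facing(a: Person, b: Person, tolerance=math.pi / 3) -> bool:
    """True if a's heading points roughly towards b (within the tolerance angle)."""
    angle_to_b = math.atan2(b.y - a.y, b.x - a.x)
    diff = abs((angle_to_b - a.heading + math.pi) % (2 * math.pi) - math.pi)
    return diff <= tolerance

def in_social_interaction(a: Person, b: Person, max_dist=2.0) -> bool:
    """Heuristic short-term social context: close together and mutually oriented."""
    dist = math.hypot(a.x - b.x, a.y - b.y)
    return dist <= max_dist and facing(a, b) and facing(b, a)

if __name__ == "__main__":
    alice = Person(0.0, 0.0, 0.0)             # looking along +x
    bob = Person(1.5, 0.0, math.pi)           # looking back along -x
    print(in_social_interaction(alice, bob))  # True under these toy assumptions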

    ‘IMPLICIT CREATION’ – NON-PROGRAMMER CONCEPTUAL MODELS FOR AUTHORING IN INTERACTIVE DIGITAL STORYTELLING

    Interactive Digital Storytelling (IDS) constitutes a research field that emerged from several areas of art, creation and computer science. It investigates technologies and possible artefacts that allow ‘highly-interactive’ experiences of digital worlds with compelling stories. However, the situation for story creators approaching ‘highly-interactive’ storytelling is complex. There is a gap between the available technology, which requires programming and prior knowledge in Artificial Intelligence, and established models of storytelling, which are too linear to have the potential to be highly interactive. This thesis reports on research that lays the groundwork for bridging this gap, leading to novel creation philosophies in future work. A design research process was pursued that centred on the suggestion of conceptual models explaining a) process structures of interdisciplinary development, b) interactive story structures including the user of the interactive story system, and c) the positioning of human authors within semi-automated creative processes. By means of ‘implicit creation’, storytelling and the modelling of simulated worlds are reconciled. The conceptual models are informed by an exhaustive literature review of established neighbouring disciplines: a) creative principles in different storytelling domains, such as screenwriting, video game writing, role-playing and improvisational theatre, b) narratological studies of story grammars and structures, and c) principles of designing interactive systems, in the areas of basic HCI design and models, discourse analysis in conversational systems, and game and simulation design. In a case study of artefact building, the initial models were put into practice, evaluated and extended. These artefacts are a) an authoring tool (‘Scenejo’) conceived for the creation of digital conversational stories, and b) a serious game (‘The Killer Phrase Game’) developed as an application case. The study demonstrates how, starting out from linear storytelling, iterative steps of ‘implicit creation’ can lead to more variability and interactivity in the designed interactive story. In the concrete case, the steps included abstracting dialogues into conditional actions and creating a dynamic world model of the conversation; this process and the resulting artefacts can serve as a model illustrating non-programmer approaches to ‘implicit creation’ in a learning process. The research shows that the field of Interactive Digital Storytelling still has to be advanced further before general creative principles can be fully established; this is a long-term endeavour that depends on environmental factors and requires further technological development. The gap is not yet closed, but it can now be better explained. The results build groundwork for the education of prospective authors. Concluding the thesis, IDS-specific creative principles are proposed for evaluation in future work.
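
    The step of abstracting scripted dialogue into conditional actions over a dynamic world model, mentioned in the case study above, can be sketched roughly as follows. The world-state variables, conditions, and lines are hypothetical illustrations, not the actual data model of Scenejo or The Killer Phrase Game.

# Minimal sketch of 'implicit creation': dialogue lines become conditional actions
# whose preconditions and effects refer to a shared world model, so the order of
# utterances emerges from state instead of a fixed linear script.

world = {"tension": 0, "topic_raised": False}

actions = [
    {
        "line": "Moderator: Let's keep this constructive.",
        "condition": lambda w: w["tension"] >= 2,
        "effect": lambda w: w.update(tension=0),
    },
    {
        "line": "Participant: That's a killer phrase, we tried it before!",
        "condition": lambda w: w["topic_raised"] and w["tension"] < 2,
        "effect": lambda w: w.update(tension=w["tension"] + 1),
    },
    {
        "line": "Facilitator: I'd like to put a new idea on the table.",
        "condition": lambda w: not w["topic_raised"],
        "effect": lambda w: w.update(topic_raised=True),
    },
]

def step(world, actions):
    """Pick the first action whose precondition holds and apply its effect."""
    for action in actions:
        if action["condition"](world):
            action["effect"](world)
            return action["line"]
    return None

for _ in range(4):
    utterance = step(world, actions)
    if utterance is None:
        break
    print(utterance)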

    A software based mentor system

    This thesis describes the architecture, implementation issues and evaluation of Mentor, an educational support system designed to mentor students in their university studies. Students can ask (by typing) natural language questions, and Mentor uses several educational paradigms to respond with information from its Knowledge Base or from data-mined online Web sites. Typically the questions focus on the student’s assignments or on their preparation for examinations. Mentor is also pro-active in that it prompts the student with questions such as "Have you started your assignment yet?". If the student responds and enters into a dialogue with Mentor, then, based upon the student’s questions and answers, it guides them through a Directed Learning Path planned by the lecturer, specific to that assessment. The objectives of the research were to determine whether such a system could be designed, developed and applied in a large-scale, real-world environment, and whether the resulting system was beneficial to the students using it. The study was significant in that it provided an analysis of the design and implementation of the system as well as a detailed evaluation of its use. This research integrated the Computer Science disciplines of network communication, natural language parsing, user interface design and software agents, together with pedagogies from the Computer Aided Instruction and Intelligent Tutoring System fields of Education. Collectively, these disciplines provide the foundation for the two main thesis research areas of Dialogue Management and Tutorial Dialogue Systems. The development and analysis of the Mentor system required the design and implementation of an easy-to-use text-based interface as well as a hyper- and multimedia graphical user interface, a client-server system, and a dialogue management system based on an extensible kernel. The multi-user Java-based client-server system used Perl 5 regular expression pattern matching for natural language parsing, along with a state-based Dialogue Manager and a Knowledge Base marked up using the XML-based Virtual Human Markup Language. The kernel was also used in other dialogue management applications, such as with computer-generated Talking Heads. The system also enabled users to easily program their own knowledge into the Knowledge Base, as well as to program new information retrieval or management tasks, so that the system could grow with the user. The overall framework integrating and managing these components into a usable system employed educational pedagogies that support the student’s learning process. The thesis outlines the learning paradigms used in, and summarises the evaluation of, three course-based case studies of university students’ perception of the system, assessing how effective and useful it was and whether students benefited from using it. The thesis demonstrates that Mentor met its objectives and was very successful in helping students with their university studies. As one participant indicated: ‘I couldn’t have done without it.’
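
    A minimal sketch of the pattern-matching, state-based dialogue management described above is given below, in Python rather than the Java and Perl 5 stack Mentor actually used; the states, patterns, and canned responses are illustrative assumptions.

import re

# Tiny state-based dialogue manager: each state owns (pattern, response, next_state)
# rules, and user input is matched only against the rules of the current state.
RULES = {
    "start": [
        (re.compile(r"\b(assignment|homework)\b", re.I),
         "Have you started your assignment yet?", "assignment"),
        (re.compile(r"\b(exam|test)\b", re.I),
         "Which topic would you like to revise for the exam?", "revision"),
    ],
    "assignment": [
        (re.compile(r"\b(no|not yet)\b", re.I),
         "Let's look at step 1 of the directed learning path.", "assignment"),
        (re.compile(r"\byes\b", re.I),
         "Great - which part are you stuck on?", "assignment"),
    ],
    "revision": [
        (re.compile(r".*"),
         "Here is some material from the knowledge base on that topic.", "start"),
    ],
}

def respond(state, utterance):
    """Return (reply, next_state); fall back to a re-prompt if nothing matches."""
    for pattern, reply, next_state in RULES.get(state, []):
        if pattern.search(utterance):
            return reply, next_state
    return "Could you rephrase that?", state

state = "start"
for text in ["I have an assignment due", "not yet"]:
    reply, state = respond(state, text)
    print(reply)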

    Intelligent business processes composition based on mas, semantic and cloud integration (IPCASCI)

    Component reuse is one of the techniques that most clearly contributes to the evolution of the software industry by providing efficient mechanisms to create quality software. Reuse increases software reliability, because it builds on previously tested software components, as well as development productivity, and leads to a clear reduction in cost. Web services have become a standard for application development in cloud computing environments and are essential in business process development. These services facilitate software construction that is relatively fast and efficient, two aspects which can be improved further by defining suitable models of reuse. This research work is intended to define a model that captures the requirements for constructing new services through service composition. To this end, the composition is based on previously tested Web services and the artificial intelligence tools at our disposal. It is believed that a multi-agent architecture based on virtual organizations is a suitable tool to facilitate the construction of cloud computing environments for business processes from other existing environments, with the help of ontological models and of tools supporting the BPEL (Business Process Execution Language) standard. In the context of this proposal, a new business process must be generated from the services available in the platform, starting from the requirement specifications that the process should meet. These specifications consist of a semi-free textual description of the requirements of the new service. Virtual organizations based on a multi-agent system manage the tasks requiring intelligent behaviour: the system analyses the input (the textual description of the proposal) in order to decompose it into computable functionalities, which are subsequently treated. The Web services (or business processes) stored for reuse have been created from the perspective of SOA architectures and are associated with an ontological component, which allows the multi-agent system (based on virtual organizations) to identify the services needed to complete the reuse process. Once the services that will compose the solution business process have been identified, the proposed model performs the composition by applying the BPEL standard. This standard allows Web services to be composed easily and provides the advantage of a direct mapping from Business Process Modeling Notation (BPMN) diagrams.
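
    Very roughly, the matching step described above can be illustrated as follows: stored services carry ontology-style keyword annotations, the free-text requirement is tokenized, and overlapping services are ordered into a sequential plan that a BPEL process would later execute. The toy catalogue, annotations, and naive overlap scoring are assumptions for illustration only; the actual model relies on virtual organizations of agents and semantic models rather than keyword overlap.

# Rough illustration: pick reusable services whose ontology-style annotations
# overlap with the tokenized requirement text, then order them into a plan
# that a BPEL <sequence> would later execute.

CATALOG = {
    "CheckCredit":  {"keywords": {"credit", "customer", "check"},    "stage": 1},
    "ReserveStock": {"keywords": {"stock", "reserve", "warehouse"},  "stage": 2},
    "IssueInvoice": {"keywords": {"invoice", "billing", "customer"}, "stage": 3},
}

def compose(requirement, catalog=CATALOG, threshold=1):
    """Select annotated services matching the requirement and order them."""
    tokens = set(requirement.lower().split())
    selected = [
        (meta["stage"], name)
        for name, meta in catalog.items()
        if len(meta["keywords"] & tokens) >= threshold
    ]
    return [name for _, name in sorted(selected)]

plan = compose("check the customer credit and issue the invoice")
print(plan)  # ['CheckCredit', 'IssueInvoice'] with these toy annotations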

    Annual Report of the University, 2007-2008, Volumes 1-6

    Project Summary and Goals: Historically, affirmative action policies have evolved from initial programs aimed at providing equal educational opportunities to all students, to the legitimacy of programs aimed at achieving diversity in higher education. In June 2003, a U.S. Supreme Court ruling on affirmative action pushed higher education across the threshold toward creating a new paradigm for diversity in the 21st century. The court clearly stated that affirmative action is still viable, but that our institutions must reconsider our traditional concepts for building diversity in the next few decades. This shift in the historical context of diversity in our society has led to an important objective: if a diverse student body is an essential factor in a quality higher education, then it is imperative that elementary, secondary and undergraduate schools fulfill their missions to successfully educate a diverse population. In New Mexico, the success of graduate programs depends on the state's P-12 schools, the community and institutions of higher education, and their shared task of educating all students. Further, when the lens is broadened to view the entire P-20 educational pipeline, it becomes apparent that the loss of students from elementary school to high school is enormous, constricting the number of students who go on to college. These losses are of concern not only for the students' academic education but also for the ability of the affected communities to make critical decisions and to become and stay involved in the political and policy world that affects them. Guiding Principles: Engaging Latino Communities for Education New Mexico (ENLACE NM) is a statewide collaboration of gente who represent the voices of underrepresented children and families, people who have historically not had a say in policy initiatives that directly impact them and their communities. Therefore they, and others from our community, are at the forefront of this initiative. We have developed this collaboration based on a process that empowers these communities to find their voice in the pursuit of social justice and educational access, equity and success.

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    Quantitative Assessment of Factors in Sentiment Analysis

    Sentiment can be defined as a tendency to experience certain emotions in relation to a particular object or person. Sentiment may be expressed in writing, in which case determining that sentiment algorithmically is known as sentiment analysis. Sentiment analysis is often applied to Internet texts such as product reviews, websites, blogs, or tweets, where automatically determining published feeling towards a product or service is very useful to marketers or opinion analysts. The main goal of sentiment analysis is to identify the polarity of natural language text. This thesis sets out to examine quantitatively the factors that have an effect on sentiment analysis. The factors commonly used in sentiment analysis are text features, sentiment lexica or resources, and the machine learning algorithms employed. The main aim of this thesis is to investigate systematically the interaction between sentiment analysis factors and machine learning algorithms in order to improve sentiment analysis performance as compared to the opinions of human assessors. A software system known as TJP was designed and developed to support this investigation. The research reported here has three main parts. Firstly, the role of data pre-processing was investigated with TJP using a combination of features together with publicly available datasets. This considers the relationship and relative importance of superficial text features such as emoticons, n-grams, negations, hashtags, repeated letters, special characters, slang, and stopwords. The resulting statistical analysis suggests that a combination of all of these features achieved better accuracy with the dataset and had a considerable effect on system performance. Secondly, the effect of human-annotated training data was considered, since this is required by supervised machine learning algorithms. The results gained from TJP suggest that training data greatly improves sentiment analysis performance. However, the combination of training data and sentiment lexica seems to provide optimal performance. Nevertheless, one particular sentiment lexicon, AFINN, contributed better than others in the absence of training data, and would therefore be appropriate for unsupervised approaches to sentiment analysis. Finally, the performance of two sophisticated ensemble machine learning algorithms was investigated. Both the Arbiter Tree and the Combiner Tree were chosen since neither had previously been used for sentiment analysis. The objective here was to demonstrate their applicability and effectiveness compared to that of the leading single machine learning algorithms, Naïve Bayes and Support Vector Machines. The results showed that whilst either can be applied to sentiment analysis, the Arbiter Tree ensemble algorithm achieved better accuracy than either the Combiner Tree or any single machine learning algorithm.
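
    The interaction between superficial text features and a single supervised classifier, as studied with TJP, can be sketched roughly as follows. The tiny corpus, the particular feature transformations, and the use of scikit-learn are illustrative assumptions and not the TJP implementation.

import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

def preprocess(text):
    """Toy versions of the superficial features discussed above."""
    text = text.lower()
    text = re.sub(r"[:;]-?\)", " EMOTICON_POS ", text)   # map emoticons to tokens
    text = re.sub(r"[:;]-?\(", " EMOTICON_NEG ", text)
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)            # collapse repeated letters
    text = re.sub(r"\bnot (\w+)", r"not_\1", text)        # crude negation handling
    return text

train_texts = ["great phone, love it :)", "terrible battery, not good :(",
               "sooo happy with this", "not worth the money"]
train_labels = ["pos", "neg", "pos", "neg"]

# Unigram + bigram counts feeding a Naive Bayes classifier.
model = Pipeline([
    ("bow", CountVectorizer(ngram_range=(1, 2))),
    ("nb", MultinomialNB()),
])
model.fit([preprocess(t) for t in train_texts], train_labels)

print(model.predict([preprocess("not happy with the battery :(")]))  # likely 'neg' on this toy corpus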