152 research outputs found

    Using geolocated tweets for characterization of Twitter in Portugal and the Portuguese administrative regions

    The information published by the millions of public social network users is an important source of knowledge that can be used in academic, socioeconomic or demographic studies (distribution of male and female population, age, marital status, birth rate), lifestyle analysis (interests, hobbies, social habits), or to study online behavior (time spent online, interaction with friends, or discussion about brands, products or politics). This work uses a database of about 27 million Portuguese geolocated tweets, produced in Portugal by 97.8K users during a 1-year period, to extract information about the behavior of the geolocated Portuguese Twitter community, and shows that with this information it is possible to extract overall indicators such as: the daily periods of increased activity per region; prediction of regions where the concentration of the population is higher or lower in certain periods of the year; how regional inhabitants feel about life; or what is talked about in each region. We also analyze the behavior of the geolocated Portuguese Twitter users based on the tweeted contents, and find indications that their behavior differs in certain relevant aspects from other Twitter communities, hypothesizing that this is in part due to the abnormally high percentage of young teenagers in the community. Finally, we present a small case study on Portuguese tourism in the Algarve region. To the best of our knowledge, this work is the first study that shows geolocated Portuguese users' behavior on Twitter with a focus on geographic regional use.
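The abstract does not specify a schema or method for deriving the per-region activity indicators; a minimal sketch of that kind of aggregation, with hypothetical field names, could look like this:

```python
from collections import Counter

# Hypothetical tweet records: the study's actual schema is not given,
# so we assume each tweet carries a region label and a timestamp hour.
tweets = [
    {"region": "Lisboa", "hour": 22},
    {"region": "Lisboa", "hour": 22},
    {"region": "Lisboa", "hour": 9},
    {"region": "Porto", "hour": 9},
    {"region": "Faro", "hour": 22},
]

def activity_by_region_and_hour(tweets):
    """Count tweets per (region, hour) and keep each region's busiest hour."""
    counts = Counter((t["region"], t["hour"]) for t in tweets)
    peaks = {}
    for (region, hour), n in counts.items():
        if region not in peaks or n > peaks[region][1]:
            peaks[region] = (hour, n)
    return peaks

print(activity_by_region_and_hour(tweets))
# e.g. {'Lisboa': (22, 2), 'Porto': (9, 1), 'Faro': (22, 1)}
```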

    Personal Learning Environments (PLE) in a distance learning course on mathematics applied to business

    This paper argues that the dominant form of distance learning common in most e-learning systems rests on a set of learning devices and environments that may be outdated from the student's perspective, namely because it is not supportive of learner empowerment and does not facilitate the efforts of self-directed learners. For this study we gathered and examined data on students' use of Personal Learning Environments (PLEs) within a course on Mathematics Applied to Business offered by the Portuguese Open University (Universidade Aberta). We base the discussion on aspects that characterize students' conceptions of PLEs and on the emergence of connectivism as a new account of how learning occurs in a networked global environment, and conclude that an important goal of online course design should be to let students explore what emerging Web 2.0 tools have to offer in distance learning. The widespread adoption of PLEs, bringing together learning from different contexts and sources, shows that students are capable of expression in different forms, generating added value for distance learning environments.

    Políticas de Copyright de Publicações Científicas em Repositórios Institucionais: O Caso do INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, gradually moving towards an opening of the research cycle. In the long term, this opening makes it possible to resolve an adversity researchers have long faced: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is predominantly dominated by large commercial publishers and subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Open Access Initiative (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. This movement has gained importance in Portugal since 2003, with the creation of the first institutional repository at the national level. Institutional repositories have emerged as a tool for disseminating the scientific production of an institution, with the aim of opening up research results both before publication and peer review (preprint) and after (postprint), and consequently increasing the visibility of the work done by a researcher and his or her institution. The present study, based on an analysis of the copyright policies of INESC TEC's most relevant scientific publications, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that there is still considerable awareness-raising work to be done, not only among researchers but also within the institution and society as a whole.
The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving in the repository of publications developed within the institution, serves as a starting point for a greater appreciation of the scientific production of INESC TEC.

    Rapport: a fact-based question answering system for Portuguese

    Question answering is one of the longest-standing problems in natural language processing. Although natural language interfaces for computer systems can be considered more common these days, the same still does not happen regarding access to specific textual information. Any full-text search engine can easily retrieve documents containing user-specified or closely related terms; however, it is typically unable to answer user questions with small passages or short answers. The problem with question answering is that text is hard to process, due to its syntactic structure and, to a higher degree, to its semantic contents. At the sentence level, although the syntactic aspects of natural language have well-known rules, the size and complexity of a sentence may make it difficult to analyze its structure. Furthermore, semantic aspects are still arduous to address, with text ambiguity being one of the hardest tasks to handle. There is also the need to correctly process the question in order to define its target, and then to select and process the answers found in a text. Additionally, the selected text that may yield the answer to a given question must be further processed in order to present just a passage instead of the full text. These issues also take longer to address in languages other than English, as is the case of Portuguese, which has far fewer people working on it. This work focuses on question answering for Portuguese. In other words, our field of interest is the presentation of short answers, passages, and possibly full sentences, but not whole documents, in response to questions formulated in natural language. For that purpose, we have developed a system, RAPPORT, built upon open information extraction techniques for extracting triples, so-called facts, characterizing information in text files, and then storing and using them to answer user queries posed in natural language.
These facts, in the form of subject, predicate and object, alongside other metadata, constitute the basis of the answers presented by the system. Facts work both by storing short and direct information found in a text, typically entity-related information, and by containing in themselves the answers to the questions, already in the form of small passages. As for the results, although there is margin for improvement, they are tangible proof of the adequacy of our approach and of its different modules for storing information and retrieving answers in question answering systems. In the process, in addition to contributing a new approach to question answering for Portuguese and validating the application of open information extraction to question answering, we have developed a set of tools that has been used in other natural language processing work, such as a lemmatizer, LEMPORT, which was built from scratch and has high accuracy. Many of these tools result from improving those found in the Apache OpenNLP toolkit, by pre-processing their input, post-processing their output, or both, and by training models for use in those tools or others, such as MaltParser. Other tools include interfaces to resources containing, for example, synonyms, hypernyms and hyponyms, and lists of, for instance, relations between verbs and agents, built using rules.
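The core idea of answering from stored (subject, predicate, object) facts can be illustrated with a toy lookup; the triples and matching strategy below are illustrative assumptions, not RAPPORT's actual extraction or retrieval pipeline:

```python
# Illustrative fact triples of the kind the abstract describes.
facts = [
    ("Lisboa", "is capital of", "Portugal"),
    ("Camões", "wrote", "Os Lusíadas"),
]

def answer(subject, predicate, facts):
    """Return the object of the first stored triple matching subject and predicate."""
    for s, p, o in facts:
        if s == subject and p == predicate:
            return o
    return None  # no matching fact found

print(answer("Camões", "wrote", facts))  # Os Lusíadas
```

In the real system the question itself would first be analyzed to identify the subject and predicate it targets; here they are passed in directly for brevity.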

    Diálogos com a arte. Revista de arte, cultura e educação, n.º 11

    The journal "Diálogos com a Arte. Revista de Arte, Cultura e Educação" is an indexed annual journal of international circulation, published since 2010 and edited by the School of Education of the Polytechnic Institute of Viana do Castelo (ESE-IPVC) in collaboration with the Center for Research in Child Studies of the University of Minho (CIEC-UM). The journal offers students, teachers and researchers in the arts the possibility of reflecting on both national and international theories and practices in art, culture and education. The editorial board defines cooperation as a form of cultural activism that requires acting on problems and sharing actions and experiences. Cooperation is successfully accomplished when all the participants' objectives are shared and the results are beneficial for everyone. This requires constant dialogue and the establishment of relations in educational programs, projects, community interventions, artistic and cultural training, and teacher education.

    Runtime reconfiguration of physical and virtual pervasive systems

    Today, almost everyone comes into contact with smart environments in their everyday lives. Environments such as smart homes, smart offices, or pervasive classrooms contain a plethora of heterogeneous connected devices and provide diverse services to users. The main goal of such smart environments is to support users during their daily chores and to simplify interaction with the technology. Pervasive middleware can be used for seamless communication between all available devices, integrating them directly into the environment. Only a few years ago, a user entering a meeting room had to set up, for example, the projector and connect a computer manually, or teachers had to distribute files via mail. With the rise of smart environments these tasks can be automated by the system, e.g., upon entering a room, the smartphone automatically connects to a display and the presentation starts. Besides all the advantages of smart environments, they also bring up two major problems. First, while the built-in automatic adaptation of many smart environments is often able to adjust the system in a helpful way, there are situations where the user has something different in mind. In such cases, it can be challenging for inexperienced users to configure the system to their needs. Second, while users are getting increasingly mobile, they still want to use the systems they are accustomed to. As an example, an employee on a business trip wants to join a meeting taking place in a smart meeting room. Thus, smart environments need to be accessible remotely and should provide all users with the same functionality and user experience. For these reasons, this thesis presents the PerFlow system, consisting of three parts. First, the PerFlow Middleware, which allows the reconfiguration of a pervasive system at runtime.
Second, with the PerFlow Tool, inexperienced end users are able to create new configurations without prior knowledge of programming distributed systems. To this end, a specialized visual scripting language is designed, which allows the creation of rules for the communication between different devices. Third, to offer remote participants the same user experience, the PerFlow Virtual Extension allows the implementation of pervasive applications for virtual environments. After introducing the design of the PerFlow system, the implementation details and an evaluation of the developed prototype are outlined. The evaluation discusses the usability of the system in a real-world scenario and the performance implications of the middleware, evaluated in our own pervasive learning environment, the PerLE testbed. Further, a two-stage user study is introduced to analyze the ease of use and the usefulness of the visual scripting tool.
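The visual scripting language is described only at a high level; a hypothetical textual encoding of the kind of trigger-action rule an end user might compose (the class, field names, and device names below are assumptions, not PerFlow's actual model) could look like:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    trigger_device: str   # device that emits the event
    event: str            # event name
    target_device: str    # device that reacts
    action: str           # action to invoke on the target

# Example rule: entering a room starts the presentation on the projector.
rules = [
    Rule("smartphone", "entered_room", "projector", "start_presentation"),
]

def dispatch(device, event, rules):
    """Return the (target, action) pairs fired when `device` emits `event`."""
    return [(r.target_device, r.action)
            for r in rules
            if r.trigger_device == device and r.event == event]

print(dispatch("smartphone", "entered_room", rules))
# [('projector', 'start_presentation')]
```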

    Sistema inteligente de recolha e armazenamento de informação proveniente do Twitter

    Regardless of the degree of knowledge and use of social networks, their importance in contemporary society is undeniable. Advertising an event, commenting on or spreading an idea are common practices on social networks, making them an environment conducive to the expression of individual opinion and its dissemination through the main channels, consequently leading to the formation of judgments of value and fact about changes and developments in the world around us. Analyzing and monitoring feelings towards a specific organization, forecasting sales and consumer acceptance of a product or service, and anticipating the propagation of a virus among the population are concrete examples of how the information collected on social networks can be useful in several fields of research (areas such as tourism, marketing and health are those that have benefited most from this phenomenon). Considering such relevance, questions arise about the impact that social networks have on today's society, and how to treat this information analytically and effectively, making it truly useful, is widely debated. To construct (or deconstruct) a credible fact, a considerable volume of data and a remarkable coverage of the set of Twitter users are needed. Several authors who have developed work related to this issue have found it difficult to obtain significant volumes of information, given Twitter's limitations on providing access to its data. In those circumstances, the data collected are often constrained to a limited analysis where it becomes complex to understand the true contours of the questions, or sometimes only some of their characteristics are considered, in order to simplify modeling and storage.
With the premise of reducing this information bias, the objective of this work is to develop an architecture for building a corpus of tweets that attempts to overcome the limitations imposed by Twitter. The NoSQL database paradigm is explored in order to store each tweet in full, resulting in an Information System that automates the collection, processing, storage of, and access to a considerable volume of tweets, produced in Portugal by Portuguese authors and written in European Portuguese. The presented architecture produces a corpus of tweets collected in real time that carry an indication of their geolocation. Starting from geolocated tweets, the corpus is expanded by reading the timelines of the authors of those tweets, recovering a large part of the information they produce. On average, about 530K tweets are recovered per day.
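The corpus-expansion step (reading the timelines of authors seen in geolocated tweets) can be sketched as follows; the record fields and the `fetch_timeline` callable are stand-ins, since the real system talks to the Twitter API and a NoSQL store:

```python
def expand_corpus(geolocated_tweets, fetch_timeline):
    """Grow the corpus with timeline tweets of every author of a geolocated tweet."""
    authors = {t["author"] for t in geolocated_tweets}
    corpus = list(geolocated_tweets)
    seen = {t["id"] for t in corpus}
    for author in sorted(authors):
        for tweet in fetch_timeline(author):
            if tweet["id"] not in seen:   # skip tweets already stored
                seen.add(tweet["id"])
                corpus.append(tweet)
    return corpus

# Hypothetical data for illustration:
geo = [{"id": 1, "author": "ana"}]
timelines = {"ana": [{"id": 1, "author": "ana"}, {"id": 2, "author": "ana"}]}
corpus = expand_corpus(geo, lambda a: timelines.get(a, []))
print(len(corpus))  # 2 — the duplicate geolocated tweet is not stored twice
```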

    Software-Defined Networking: A Comprehensive Survey

    The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN; we explain its main concepts, how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting.
In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms, with a focus on aspects such as resiliency, scalability, performance, security, and dependability, as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
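The survey's central mechanism, a control plane installing match/action rules into switch flow tables, can be shown with a toy flow table; the fields and actions below are simplified illustrations, not OpenFlow's exact schema:

```python
# A toy flow table: (match fields, action) entries checked in priority order.
flow_table = [
    ({"dst_ip": "10.0.0.2"}, ("forward", "port2")),
    ({}, ("send_to_controller", None)),  # table-miss entry: empty match matches all
]

def handle_packet(packet, flow_table):
    """Return the action of the first flow entry whose match fields all hold."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return ("drop", None)

print(handle_packet({"dst_ip": "10.0.0.2"}, flow_table))  # ('forward', 'port2')
print(handle_packet({"dst_ip": "10.0.0.9"}, flow_table))  # table miss -> controller
```

The table-miss entry captures the SDN split: packets the switch cannot handle are punted to the (logically centralized) controller, which may then install a new rule.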

    Annual record no. 49

    INHIGEO produces an annual publication that includes information on the commission's activities, national reports, book reviews, interviews and occasional historical articles.

    Selected papers on Hands-on Science II

    This second volume of the "Selected Papers on Hands-on Science" series published by the Hands-on Science Network brings together some of the most relevant works presented at the 2008, 2009, 2010 and 2011 editions of the annual International Conference on Hands-on Science. From pre-school science education to lifelong science learning and teacher training, in formal, non-formal and informal contexts, the large and diversified range of works that makes up this book renders it an important tool for schools, educators and all those involved in science education and in the promotion of scientific literacy.