2,201 research outputs found

    Optical tomography: Image improvement using mixed projection of parallel and fan beam modes

    Mixed parallel and fan beam projection is a technique used to increase image quality. This research focuses on enhancing image quality in optical tomography, where quality is quantified by the Peak Signal-to-Noise Ratio (PSNR) and Normalized Mean Square Error (NMSE) parameters. The findings of this research show that by combining parallel and fan beam projections, image quality can be increased by more than 10% in terms of PSNR and more than 100% in terms of NMSE compared to a single parallel beam
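The two metrics named above can be computed directly; a minimal sketch with NumPy, using toy arrays in place of real tomographic reconstructions (the image sizes and noise level are invented for illustration):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image
    and its reconstruction (higher is better)."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def nmse(reference, reconstruction):
    """Normalized Mean Square Error: reconstruction error scaled
    by the energy of the reference image (lower is better)."""
    return np.sum((reference - reconstruction) ** 2) / np.sum(reference ** 2)

# Toy 8-bit image standing in for a reconstruction from one projection mode.
rng = np.random.default_rng(0)
truth = rng.integers(0, 256, (64, 64)).astype(float)
recon = truth + rng.normal(0, 5, truth.shape)   # small reconstruction error

print(psnr(truth, recon), nmse(truth, recon))
```

A mixed-mode reconstruction would simply be scored with the same two functions and compared against the single-beam score.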

    Sentiment Analysis in the Era of Web 2.0: Applications, Implementation Tools and Approaches for the Novice Researcher

    Nowadays, people find it easier to express opinions via social media, formally known as Web 2.0. Sentiment analysis is an essential field of natural language processing in Computer Science that deals with analyzing people's opinions on a subject matter and discovering the polarity they contain. These opinions can be processed in collective form (as a document) or in smaller units such as sentences or phrases. Sentiment analysis can be applied in education, research optimization, politics, business, health, science and so on, generating massive data that requires efficient tools and techniques for analysis. Furthermore, the standard tools currently used for data collection, such as online surveys, interviews, and student evaluations of teachers, limit respondents to expressing opinions within the researcher's surveys and cannot generate data at the scale Web 2.0 does. Sentiment analysis techniques are classified into three (3) classes: machine learning, lexicon-based and hybrid. This study explores sentiment analysis of Web 2.0 for novice researchers to promote collaboration and suggest the best tools for sentiment data analysis and result efficiency. Studies show that machine learning approaches handle large data sets well in document-level sentiment classification, and in some studies hybrid techniques that combine machine learning and lexicon-based methods perform better than lexicon-based methods alone. Python and R are commonly used tools for implementing sentiment analysis, but SentimentAnalyzer and SentiWordNet are recommended for the novice. Keywords: Sentiment Analysis; Web 2.0; Applications; Tools; Novice Researcher
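Of the three classes above, the lexicon-based one is the simplest to demonstrate. A minimal sketch in plain Python; the word lists here are invented for illustration and are not drawn from SentiWordNet or any real lexicon:

```python
# Illustrative mini-lexicon; a real system would use a resource
# such as SentiWordNet instead of these hand-picked words.
POSITIVE = {"good", "great", "excellent", "love", "helpful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "useless"}

def polarity(text):
    """Return 'positive', 'negative' or 'neutral' for a sentence-level unit
    by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("The lecturer was great and very helpful"))   # → positive
```

Machine learning and hybrid approaches replace or augment the fixed word lists with classifiers trained on labelled opinion data.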

    Recognition on Online Social Network by user's writing style

    Compromising legitimate accounts is the most popular way of disseminating fraudulent content in Online Social Networks (OSN). To address this issue, we propose an approach for recognizing compromised Twitter accounts based on Authorship Verification. Our solution detects compromised accounts by analysing their users' writing styles: when an account's content does not match its user's writing style, we conclude that the account has been compromised, as in Authorship Verification. Our approach follows the profile-based paradigm and uses N-grams as its kernel. A threshold is then found to represent the boundary of an account's writing style. Experiments were performed using two subsampled datasets from Twitter. The results show the model is well suited to recognizing compromised Online Social Network accounts, identifying user styles with over 95% accuracy on both datasets
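A profile-based N-gram verifier of the kind described can be sketched as follows. The character-trigram kernel, the cosine similarity measure, the sample posts and the threshold value are all illustrative assumptions, not the paper's exact configuration:

```python
from collections import Counter
from math import sqrt

def ngrams(text, n=3):
    """Character n-gram frequency profile (the kernel of the approach)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(p[g] * q[g] for g in p if g in q)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def is_compromised(profile, new_posts, threshold):
    """Flag the account when new content falls outside the learned
    writing-style boundary (similarity below the threshold)."""
    return cosine(profile, ngrams(" ".join(new_posts))) < threshold

# Invented tweet history standing in for a user's known writing.
history = ["off to the gym again, leg day!!", "loving this weather today!!"]
profile = ngrams(" ".join(history))
print(is_compromised(profile, ["claim your free prize now, limited offer"], 0.2))
```

In the paper's setting, the threshold itself would be fitted per account from held-out authentic posts rather than chosen by hand.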

    Developing front-end Web 2.0 technologies to access services, content and things in the future Internet

    The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on, since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, content and things in the future Internet. We illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To this end, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution, and provides statistical data on how the proposed architecture improves on these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative

    Exploring user-generated content for improving destination knowledge: the case of two world heritage cities

    This study explores two World Heritage Sites (WHS) as tourism destinations by applying several techniques uncommon in these settings: Smart Tourism Analytics, namely text mining, sentiment analysis, and market basket analysis, to highlight patterns according to attraction, nationality, and repeated visits. Salamanca (Spain) and Coimbra (Portugal) are analyzed and compared based on 8,638 online travel reviews (OTRs) from TripAdvisor (2017–2018). Findings show that WHS reputation does not seem to be relevant to visitor-reviewers. Additionally, keyword extraction reveals that the reviews do not differ from language to language or from city to city, and several keywords related to history and heritage were identified, in particular architectural styles, names of kings, and places. The study identifies topics that destination management organizations could use to promote these cities, highlights the advantages of applying a data science approach, and confirms the rich information value of OTRs as a tool to (re)position the destination according to smart tourism design tenets. FCT: UIDB/04470/2020
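As a rough illustration of the market basket step, support and confidence can be computed over "baskets" of attractions co-mentioned in a single review. The baskets below are invented and are not drawn from the TripAdvisor data:

```python
from itertools import combinations
from collections import Counter

# Each basket: the set of attractions mentioned together in one review.
baskets = [
    {"cathedral", "university", "river"},
    {"cathedral", "university"},
    {"university", "library"},
    {"cathedral", "river"},
]

pair_counts = Counter()   # how many reviews mention both items of a pair
item_counts = Counter()   # how many reviews mention each single item
for b in baskets:
    item_counts.update(b)
    pair_counts.update(frozenset(p) for p in combinations(sorted(b), 2))

pair = frozenset({"cathedral", "university"})
support = pair_counts[pair] / len(baskets)            # fraction of all reviews
confidence = pair_counts[pair] / item_counts["cathedral"]  # P(university | cathedral)
print(support, confidence)   # support = 0.5, confidence = 2/3
```

High-confidence pairs of this kind are what a destination management organization could use to bundle attractions in promotion.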

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use-cases, and socio-economic and legal aspects. These were assessed by two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges

    Tag disambiguation based on social network information

    Within 20 years the Web has grown from a tool for scientists at CERN into a global information space. While returning to its roots as a read/write tool, it is entering a more social and participatory phase: a new, improved version called the Social Web, where users are responsible for generating and sharing content on the global information space, and are also accountable for replicating it. This collaborative activity can be observed in two of the most widely practised Social Web services: social network sites and social tagging systems. Users annotate their interests and inclinations with free-form keywords and share them with their social connections. Although these keywords (tags) assist information organization and retrieval, they suffer from polysemy. In this study we employ the effectiveness of social network sites to address the issue of ambiguity in social tagging, and propose that homophily in social network sites can be a useful aspect in disambiguating tags. We extracted the ‘Likes’ of 20 Facebook users and employed them in disambiguating tags on Flickr. Classifiers are generated on clusters retrieved from Flickr using the K-Nearest-Neighbour algorithm, and their degree of similarity with user keywords is then calculated. As tag disambiguation techniques lack gold standards for evaluation, we asked the users to indicate the intended contexts and used these as ground truth when examining the results. We analyse the performance of our approach by quantitative methods and report successful results: the proposed method classifies images with an accuracy of 6 out of 10 on average. Qualitative analysis reveals some factors that affect the findings and, if addressed, can produce more precise results
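The similarity step can be sketched in miniature with a set-overlap measure between a user's Likes and candidate sense clusters. The polysemous tag, the clusters and the Likes below are invented examples, and Jaccard overlap stands in for the study's K-Nearest-Neighbour classifiers over Flickr clusters:

```python
# Two candidate senses of the polysemous tag "jaguar", each represented
# by a cluster of co-occurring tags (invented for illustration).
CLUSTERS = {
    "jaguar (animal)": {"wildlife", "zoo", "cat", "jungle", "nature"},
    "jaguar (car)":    {"cars", "racing", "luxury", "engine", "driving"},
}

def jaccard(a, b):
    """Set-overlap similarity between a cluster and a user's interests."""
    return len(a & b) / len(a | b)

def disambiguate(tag_clusters, user_likes):
    """Pick the sense whose cluster is most similar to the user's Likes,
    mirroring the homophily intuition: users tag within their inclinations."""
    return max(tag_clusters, key=lambda sense: jaccard(tag_clusters[sense], user_likes))

likes = {"cars", "formula 1", "racing", "travel"}   # invented Facebook 'Likes'
print(disambiguate(CLUSTERS, likes))   # → jaguar (car)
```

The study's evaluation against user-indicated contexts corresponds to checking whether the sense returned here matches the meaning the tagger actually intended.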