
    Complete Semantics to empower Touristic Service Providers

    The tourism industry has a significant impact on the world's economy, contributing 10.2% of the world's gross domestic product in 2016. It has become a very competitive industry, in which a strong online presence is an essential aspect of business success. To achieve this goal, the proper use of the latest Web technologies, particularly schema.org annotations, is crucial. In this paper, we present our effort to improve the online visibility of touristic service providers in the region of Tyrol, Austria, by creating and deploying a substantial amount of semantic annotations according to schema.org, a widely used vocabulary for structured data on the Web. We started our work with the Tourismusverband (TVB) Mayrhofen-Hippach and all touristic service providers in the Mayrhofen-Hippach region, and applied the same approach to other TVBs and regions, as well as to other use cases. The rationale for doing this is straightforward: schema.org annotations enable search engines to understand the content better and provide better results for end users, and they also enable various intelligent applications to utilize the annotations. As a direct consequence, the region of Tyrol and its touristic service providers increase their online visibility and decrease their dependency on intermediaries, i.e. Online Travel Agencies (OTAs). Comment: 18 pages, 6 figures
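As an illustration of the kind of annotation the paper describes, a schema.org description of an accommodation provider can be serialized as JSON-LD and embedded in a web page. This is a minimal sketch; the hotel name, address details, and offer below are invented for illustration, not taken from the paper's deployed annotations.

```python
import json

# A minimal, hypothetical schema.org annotation for an accommodation
# provider, serialized as JSON-LD for embedding in a web page.
annotation = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Alpine Hotel",  # invented name
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Mayrhofen",
        "addressRegion": "Tyrol",
        "addressCountry": "AT",
    },
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {"@type": "Product", "name": "Double room"},
        "priceCurrency": "EUR",
        "price": "120.00",
    },
}

# Embedded in HTML as: <script type="application/ld+json"> ... </script>
jsonld = json.dumps(annotation, indent=2)
print(jsonld)
```

Annotations of this shape are what allow a search engine to recognize the page as describing a hotel with a bookable offer, rather than as unstructured text.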

    Overlapping factors in search engine optimization and web accessibility

    Purpose - The purpose of this paper is to show that the pursuit of a high search engine relevance ranking for a webpage is not necessarily incompatible with the pursuit of web accessibility. Design/methodology/approach - The research described arose from an investigation into the observed phenomenon that pages from accessible websites regularly appear near the top of search engine (such as Google) results, without any deliberate effort having been made, through the application of search engine optimization (SEO) techniques, to achieve this. The reasons for this phenomenon appear to lie in the numerous similarities and overlapping characteristics between SEO factors and web accessibility guidelines. Context is provided through a review of sources, including accessibility standards and relevant SEO studies, and the relationship between SEO and web accessibility is described. The particular overlapping factors between the two are identified, and the precise nature of the overlaps is explained in greater detail. Findings - The available literature provides firm evidence that the overlapping factors not only serve to ensure the accessibility of a website for all users, but are also useful for the optimization of the website's search engine ranking. The research demonstrates that any SEO project undertaken should include, as a prerequisite, the proper design of accessible web content, inasmuch as search engines will interpret the web accessibility achieved as an indicator of quality and will be better able to access and index the resulting web content. Originality/value - The present study indicates how developing websites with high visibility in search engine results also makes their content more accessible. This research work has been partially funded by the MA2VICMR (S2009/TIC-1542) and MULTIMEDICA (TIN2010-20644-C03-01) research projects.
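Two of the overlapping factors the paper alludes to, a descriptive page title and alternative text on images, are rewarded by both accessibility guidelines and ranking algorithms, and both can be checked mechanically. The sketch below, with an invented sample page, uses Python's standard-library HTML parser to flag a missing `alt` attribute:

```python
from html.parser import HTMLParser

class OverlapChecker(HTMLParser):
    """Checks two factors that accessibility guidelines and SEO both
    reward: a descriptive <title> and alt text on every <img>."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "img" and not dict(attrs).get("alt"):
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Invented sample page: one image has alt text, one does not.
page = """<html><head><title>Accessible Tyrol Hotels</title></head>
<body><img src="a.jpg" alt="hotel front"><img src="b.jpg"></body></html>"""

checker = OverlapChecker()
checker.feed(page)
print(checker.title, checker.images_missing_alt)
```

Fixing the flagged image would simultaneously satisfy an accessibility guideline (WCAG's text-alternative requirement) and give the search engine indexable text for the image, which is precisely the kind of overlap the paper documents.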

    Smart Search Engine For Information Retrieval

    This project addresses the main research problem in information retrieval and semantic search. It proposes smart search theory, a new theory based on the hypothesis that the semantic meanings of a document can be described by a set of keywords. Two experiments designed and carried out in this project provide positive evidence in support of the theory. In the proposed theory, smart search aims to determine a set of keywords for any web document by which the semantic meanings of the document can be uniquely identified; meanwhile, the size of the set of keywords is supposed to be small enough to be easily managed. This is the fundamental assumption for creating the smart semantic search engine. In this project, the rationale behind the assumption and the theory based on it are discussed, along with how the theory can be applied to keyword allocation and to the data model to be generated. The design of the smart search engine is then proposed, in order to create a solution to the efficiency problem of searching among the huge and increasing amount of information published on the web. To achieve high efficiency in web searching, statistical methods have proved effective and can be interpreted at the semantic level. Based on the frequency of joint keywords, a keyword list can be generated and its entries linked to each other to form a meaning structure. A data model is built once a proper keyword list is achieved, and the model is applied to the design of the smart search engine.
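The "frequency of joint keywords" step can be sketched concretely: count how often pairs of keywords co-occur across documents, and keep the frequent pairs as links in the meaning structure. The corpus and threshold below are invented for illustration, not taken from the project's experiments:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each document reduced to its set of keywords.
docs = [
    {"search", "engine", "ranking"},
    {"search", "engine", "semantic"},
    {"semantic", "web", "ontology"},
    {"search", "ranking", "web"},
]

# Count how often each pair of keywords occurs together
# ("joint keywords") in the same document.
pair_counts = Counter()
for keywords in docs:
    for pair in combinations(sorted(keywords), 2):
        pair_counts[pair] += 1

# Pairs above an (illustrative) frequency threshold become the links
# of the meaning structure.
links = [pair for pair, n in pair_counts.items() if n >= 2]
print(sorted(links))
```

In this toy corpus, "search" links to both "engine" and "ranking", while the single co-occurrence of "semantic" and "ontology" falls below the threshold and forms no link.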

    Webpage Ranking Analysis of Various Search Engines with Special Focus on Country-Specific Search

    In order to attract many visitors to their own website, it is extremely important for website developers that their webpage is among the best-ranked webpages in search engines. As a rule, search engine operators do not disclose their exact ranking algorithms, so website developers usually have only vague ideas about which measures have a particularly positive influence on webpage ranking. Conversely, we ask the question: "What are the properties of the best-ranked webpages?" For this purpose, we perform a detailed analysis in which we compare the properties of the best-ranked webpages with those of worse-ranked webpages. Furthermore, we compare country-specific differences.

    A Query-Centric Approach to Supporting the Development of Context-Aware Applications for Mobile Ad Hoc Networks, Doctoral Dissertation, August 2006

    The widespread use of mobile computing devices has led to an increased demand for applications that operate dependably in opportunistically formed networks. A promising approach to supporting software development for such dynamic settings is to rely on the context-aware computing paradigm, in which an application views the state of the surrounding ad hoc network as a valuable source of contextual information that can be used to adapt its behavior. Collecting context information distributed across a constantly changing network remains a significant technical challenge. This dissertation presents a query-centric approach to simplifying context interactions in mobile ad hoc networks. Using such an approach, an application programmer views the surrounding world as a single data repository over which descriptive queries can be issued. Distributed context information appears to be locally available, effectively hiding the complex networking tasks required to acquire context in an open and dynamic setting. This dissertation identifies the research issues associated with developing a query-centric approach and discusses solutions to providing query-centric support to application developers. To promote rapid and dependable software development, a query-centric middleware is provided to the application programmer. These solutions provide the means to reason about the correctness of an application's design and potentially to reduce programmer effort and errors.
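The central programming abstraction, viewing distributed context as a single repository over which descriptive queries can be issued, can be sketched in a few lines. This is an illustrative toy, not the dissertation's middleware: all class, method, and node names below are invented, and the real system must additionally handle node mobility, disconnection, and in-network query dissemination, which this sketch hides behind a plain dictionary.

```python
# Toy sketch of the query-centric abstraction: nodes of an ad hoc
# network publish local context tuples, and the application queries
# them as if they formed one local repository.

class ContextRepository:
    def __init__(self):
        self.nodes = {}  # node_id -> dict of local context items

    def publish(self, node_id, context):
        """A node contributes its local context to the 'world view'."""
        self.nodes[node_id] = context

    def query(self, predicate):
        """Issue a descriptive query over all reachable nodes; the
        distributed collection is hidden from the caller."""
        return [
            (node_id, ctx)
            for node_id, ctx in self.nodes.items()
            if predicate(ctx)
        ]

repo = ContextRepository()
repo.publish("node-a", {"temperature": 21, "role": "sensor"})
repo.publish("node-b", {"temperature": 35, "role": "sensor"})

# The application states *what* it wants, not *how* to collect it.
hot = repo.query(lambda ctx: ctx["temperature"] > 30)
print(hot)
```

The point of the abstraction is the last two lines: the application expresses a declarative condition over context and never touches routing or discovery.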

    Multi Agent Systems in Logistics: A Literature and State-of-the-art Review

    Based on a literature survey, we aim to answer our main question: "How should we plan and execute logistics in supply chains that aim to meet today's requirements, and how can we support such planning and execution using IT?" Today's requirements in supply chains include inter-organizational collaboration and more responsive and tailored supply to meet specific demand. Enterprise systems fall short of meeting these requirements. The focus of planning and execution systems should move towards an inter-enterprise and event-driven mode. Inter-organizational systems may support planning, going from supporting information exchange, and thus enabling synchronized planning within the organizations, towards the capability to do network planning based on available information throughout the network. We provide a framework for planning systems, constituting a rich landscape of possible configurations, in which the centralized and fully decentralized approaches are two extremes. We define and discuss agent-based systems, and in particular multi-agent systems (MAS). We emphasize the role of MAS coordination architectures, and then explain that transportation is, next to production, an important domain in which MAS can be and actually are applied. However, implementation is not widespread, and some implementation issues are explored. We conclude that planning problems in transportation have characteristics that comply with the specific capabilities of agent systems. In particular, these systems are capable of dealing with inter-organizational and event-driven planning settings, hence meeting today's requirements in supply chain planning and execution. Keywords: supply chain; MAS; multi-agent systems

    Improving the Reliability of Web Search Results

    Over the last years, it has been possible to observe the exponential growth of the internet. Every day new websites are created, new technologies are developed, and new data is added to the web. Searching for data available online has become common practice for everyone, because the regular user wants to know more: for any question or doubt, the user wants the answer as fast as possible. It is in this field that search engines are an exceptional tool for their users. Whether the search is for a certain website, for some specific information, or simply for knowledge, search engines help users reach their goal. Without them, finding the needed information would be much more difficult and frustrating, leading to a tremendous loss of time and resources; in most cases, the user would probably not reach the results they were looking for. The development of web search engines has thus provided greater comfort for the user. However, despite being a really effective tool, search engines can sometimes lead to unintended results: a search may produce a suggested website that does not correspond to the user's expectation. This is because search engines show only part of the content related to each hyperlink, so users often assume that the answer to what they are looking for is on some website, and when they start analysing it, the intended information is not there. Entering and leaving different websites can be a big inconvenience, even more so if the internet connection is slow (as can happen outside big cities or in less developed areas), making the user lose further time and patience.
    This dissertation intends to explore the possibility, and prove the concept, that by combining technologies such as parsing, web crawling, web mining, and the semantic web in one system, it is possible to improve the reliability of search engines, so that the user loses as little time and as few resources as possible.
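One way to read the proposed combination of crawling and mining is as a verification pass: before a result is suggested, the page content is fetched and checked for the query terms, so the user is not sent to a page that lacks the answer. The sketch below illustrates that idea only; the URLs, page texts, and scoring function are invented, and a real implementation would fetch live pages rather than use an in-memory dictionary:

```python
import re

def term_coverage(page_text, query):
    """Fraction of query terms that actually appear in the page body."""
    words = set(re.findall(r"[a-z]+", page_text.lower()))
    terms = [t.lower() for t in query.split()]
    hits = sum(1 for t in terms if t in words)
    return hits / len(terms)

# Invented stand-ins for crawled page contents.
pages = {
    "https://example.org/a": "Train timetables for Tyrol and Mayrhofen.",
    "https://example.org/b": "Photo gallery of alpine landscapes.",
}
query = "Tyrol train timetables"

# Keep only results whose body covers every query term.
reliable = [url for url, text in pages.items()
            if term_coverage(text, query) == 1.0]
print(reliable)
```

A filter of this kind directly targets the failure mode described in the abstract: the snippet looks promising, but the page itself does not contain the sought information.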