2,781 research outputs found
A Semantic Wiki-based Platform for IT Service Management
This book investigates the use of a semantic wiki for IT Service Management (ITSM) within the IT department of an SME. It emphasizes the design and prototypical implementation of tools for integrating ITSM-relevant information into the semantic wiki, as well as tools for interaction between the wiki and external programs. The result is a platform for agile, semantic wiki-based ITSM for the IT administration teams of SMEs.
Engineering Agile Big-Data Systems
To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.
Community-Driven Engineering of the DBpedia Infobox Ontology and DBpedia Live Extraction
The DBpedia project aims at extracting information from the semi-structured data present in Wikipedia articles, interlinking it with other knowledge bases, and publishing this information freely on the Web as RDF. So far, the DBpedia project has succeeded in creating one of the largest knowledge bases on the Data Web, which is used in many applications and research prototypes. However, the manual effort required to produce and publish a new version of the dataset, which was already partially outdated the moment it was released, has been a drawback. Additionally, the maintenance of the DBpedia Ontology, an ontology serving as a structural backbone for the extracted data, made the release cycles even more heavyweight. In the course of this thesis, we make two contributions. Firstly, we develop a wiki-based solution for maintaining the DBpedia Ontology; by allowing anyone to edit, we aim to distribute the maintenance work among the DBpedia community. Secondly, we extend DBpedia with a Live Extraction Framework, which is capable of extracting RDF data from articles that have recently been edited on the English Wikipedia. By making this RDF data automatically public in near real time, namely via SPARQL and Linked Data, we overcome many of the drawbacks of the former release cycles.
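A minimal sketch of consuming the published data, assuming the standard SPARQLWrapper Python library and DBpedia's public endpoint at https://dbpedia.org/sparql; the particular query is an illustrative example, not one taken from the thesis.

from SPARQLWrapper import SPARQLWrapper, JSON

# Query DBpedia's public SPARQL endpoint, which serves the extracted RDF data.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:populationTotal ?population .
        FILTER (?population > 5000000)
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Each binding maps a query variable to its value in the result row.
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])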
Simple identification tools in FishBase
Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters like fin ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.
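A hypothetical sketch of the character-based filtering idea behind these identification tools: each species record carries meristic characters and distribution data, and the candidate list is narrowed by matching observed values. The species records and field names here are illustrative, not FishBase's actual schema.

# Toy species records with meristic characters and a distribution field.
species = [
    {"name": "Gadus morhua",    "dorsal_fins": 3, "anal_fin_rays": (17, 22), "country": "Norway"},
    {"name": "Clupea harengus", "dorsal_fins": 1, "anal_fin_rays": (16, 19), "country": "Norway"},
    {"name": "Thunnus thynnus", "dorsal_fins": 2, "anal_fin_rays": (13, 16), "country": "Spain"},
]

def identify(dorsal_fins=None, anal_fin_rays=None, country=None):
    # Keep only species whose characters match every observation supplied.
    hits = species
    if dorsal_fins is not None:
        hits = [s for s in hits if s["dorsal_fins"] == dorsal_fins]
    if anal_fin_rays is not None:
        hits = [s for s in hits
                if s["anal_fin_rays"][0] <= anal_fin_rays <= s["anal_fin_rays"][1]]
    if country is not None:
        hits = [s for s in hits if s["country"] == country]
    return [s["name"] for s in hits]

# Observed: three dorsal fins and 19 anal fin rays -> narrows to Atlantic cod.
print(identify(dorsal_fins=3, anal_fin_rays=19))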
The National Transport Data Framework
Report by Professor Peter Landshoff (Cambridge University) and Professor John Polak (Imperial College London) on a project for the Department for Transport.
The NTDF is designed as a resource into which data owners deposit descriptions of their data in a central catalogue, so that people can search for data, find it, and understand its characteristics. This is valuable to individuals, to commercial organizations, and to public bodies. For example, services that provide better information to travellers will help make their journeys less stressful and persuade them to make more use of public transport. Transport operators need very diverse information, demographic, geographical, economic, and so on, to help them plan developments to their services. And policy makers need a similar range of information to help them decide how to divide their budget and afterwards to evaluate how valuable it has been. This work was supported by the Department for Transport (DfT).
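A minimal sketch of the catalogue idea, assuming nothing about the actual NTDF implementation: data owners deposit dataset descriptions, and users search those descriptions to find and understand the underlying data. All record fields and the example entry are illustrative.

from dataclasses import dataclass

@dataclass
class DatasetDescription:
    title: str
    owner: str
    keywords: list
    coverage: str      # e.g. the geographic area the data covers
    access_url: str

catalogue = []

def deposit(description):
    """Data owners deposit a description of their dataset, not the data itself."""
    catalogue.append(description)

def search(term):
    """Users search the central catalogue by title or keyword."""
    term = term.lower()
    return [d for d in catalogue
            if term in d.title.lower() or term in [k.lower() for k in d.keywords]]

deposit(DatasetDescription(
    title="Bus punctuality statistics",
    owner="Local transport operator",
    keywords=["bus", "punctuality", "public transport"],
    coverage="Cambridge",
    access_url="https://example.org/bus-data",
))
for hit in search("bus"):
    print(hit.title, "->", hit.access_url)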
Design of an E-learning system using semantic information and cloud computing technologies
Humanity currently faces many difficult problems that threaten its life and survival, and all of us can be affected by them, directly or indirectly. Education is a key part of the solution to most of them. In this thesis we make use of current technologies to enhance and ease the learning process.
We have designed an e-learning system based on semantic information and cloud computing, together with other technologies that help improve the educational process and raise student attainment. The design was informed by extensive research on relevant technologies, their types, and examples of actual systems previously described by other researchers.
In addition to the proposed design, an algorithm was implemented to identify the topics found in large textual educational resources; in testing it proved efficient compared with other methods. The algorithm can extract the main topics from textual learning resources, link related resources, and generate interactive dynamic knowledge graphs, and it accomplishes these tasks accurately and efficiently even for large books. We used Wikipedia Miner, TextRank, and Gensim within our algorithm, and its accuracy, evaluated against Gensim alone, showed a large improvement.
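A hedged sketch of the topic-identification step, using Gensim's LDA topic model on toy data. The thesis combines Wikipedia Miner, TextRank, and Gensim; this shows only a generic Gensim topic-modelling step, not the thesis's actual algorithm, and the documents and parameter choices are illustrative.

from gensim import corpora, models

# Toy corpus: each document is a list of tokens from a learning resource.
documents = [
    "semantic web linked data rdf sparql ontology".split(),
    "machine learning topic model text mining corpus".split(),
    "rdf triples sparql endpoint query linked data".split(),
    "learning resources students education knowledge graph".split(),
]

# Map tokens to integer ids and convert each document to a bag-of-words vector.
dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# Train a small LDA model; num_topics is a tuning choice, not from the thesis.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)

# Print the top words of each discovered topic.
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)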
Augmenting the system design with the implemented algorithm will produce many useful services for improving the learning process, such as: automatically identifying the main topics of large textual learning resources and connecting them to well-defined concepts from Wikipedia; enriching current learning resources with semantic information from external sources; providing students with browsable, dynamic, interactive knowledge graphs; and making use of learning groups to encourage students to share their learning experiences and feedback with other learners.
Doctoral Programme in Telematics Engineering, Universidad Carlos III de Madrid. Committee: President, Luis Sánchez Fernández; Secretary, Luis de la Fuente Valentín; Member, Norberto Fernández García.
Linked Data based Health Information Representation, Visualization and Retrieval System on the Semantic Web
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
To better facilitate health information dissemination, flexible ways to represent, query, and visualize health data become increasingly important. Semantic Web technologies, which provide a common framework allowing data to be shared and reused between applications, can be applied to the management of health data. Linked open data, a Semantic Web standard for publishing and linking heterogeneous data, allows not only humans but also machines to browse data without restriction.
Through a use case of World Health Organization HIV data for sub-Saharan Africa, a region severely affected by the HIV epidemic, this thesis built a linked-data-based health information representation, querying, and visualization system. All the data was represented in RDF and interlinked with other related datasets already on the Linked Data cloud. Overall, the system holds more than 21,000 triples and offers a SPARQL endpoint, where users can download and use the data; a SPARQL query interface, where users can pose different types of queries and retrieve the results; and a visualization interface, where users can visualize the SPARQL results with a tool of their preference. Users who are not familiar with SPARQL can instead use the linked data search engine interface to search and browse the data.
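A minimal sketch, using the rdflib Python library, of representing one health observation as RDF and querying it with SPARQL, as the system described above does at a much larger scale. The vocabulary URIs and the prevalence figure are illustrative placeholders, not the thesis's actual ontology or data.

from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/health/")

# Represent a single HIV-prevalence observation as RDF triples.
g = Graph()
obs = EX["obs/kenya-2010"]
g.add((obs, RDF.type, EX.HIVPrevalenceObservation))
g.add((obs, EX.country, Literal("Kenya")))
g.add((obs, EX.year, Literal("2010", datatype=XSD.gYear)))
g.add((obs, EX.prevalencePercent, Literal("6.2", datatype=XSD.decimal)))

# Retrieve observations with a SPARQL query over the local graph; against the
# thesis's system the same query would go to its public SPARQL endpoint.
results = g.query("""
    PREFIX ex: <http://example.org/health/>
    SELECT ?country ?value WHERE {
        ?obs a ex:HIVPrevalenceObservation ;
             ex:country ?country ;
             ex:prevalencePercent ?value .
    }
""")
for country, value in results:
    print(country, value)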
From this system we can see that current linked open data technologies have great potential to represent heterogeneous health data in a flexible and reusable manner, and that they can serve intelligent queries that support decision-making. However, to get the best from these technologies, improvements are needed both in the performance of triple stores and in domain-specific ontological vocabularies.