Web Data Extraction, Applications and Techniques: A Survey
Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of applications. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc domains. Other approaches, instead, heavily
reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims to provide a structured and comprehensive overview of the
literature in the field of Web Data Extraction. We propose a simple
classification framework in which existing Web Data Extraction applications are
grouped into two main classes, namely applications at the Enterprise level and
at the Social Web level. At the Enterprise level, Web Data Extraction
techniques emerge as a key tool for performing data analysis in Business and
Competitive Intelligence systems as well as for business process
re-engineering. At the Social Web level, Web Data Extraction techniques make it
possible to gather the large amounts of structured data continuously generated
and disseminated by Web 2.0, Social Media and Online Social Network users,
which offers unprecedented opportunities to analyze human behavior at a very
large scale. We also discuss the potential for cross-fertilization, i.e., the
possibility of reusing Web Data Extraction techniques originally designed for
one domain in other domains.
Comment: Knowledge-based System
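The rule-based, wrapper-style extraction the survey covers can be illustrated with a toy example. The sketch below (an illustration only, not taken from any system in the survey) uses Python's standard `html.parser` to pull out the fields matched by a hypothetical tag/class rule:

```python
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Toy wrapper: collect the text of elements matching a target
    tag and class attribute -- a minimal stand-in for the extraction
    rules discussed in the survey."""

    def __init__(self, tag, cls):
        super().__init__()
        self.tag, self.cls = tag, cls
        self.capture = False
        self.records = []

    def handle_starttag(self, tag, attrs):
        # Start capturing when the rule's tag/class pair matches.
        if tag == self.tag and dict(attrs).get("class") == self.cls:
            self.capture = True

    def handle_endtag(self, tag):
        if tag == self.tag:
            self.capture = False

    def handle_data(self, data):
        if self.capture and data.strip():
            self.records.append(data.strip())

# Hypothetical page fragment with two "price" fields and one "name" field.
html = ('<ul><li class="price">12.50</li>'
        '<li class="name">Widget</li>'
        '<li class="price">3.99</li></ul>')
parser = FieldExtractor("li", "price")
parser.feed(html)
print(parser.records)  # prints "['12.50', '3.99']"
```

Real systems add robustness (tree-based matching, learned wrappers, repair on page change), but the core idea of mapping markup patterns to structured records is the same.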
Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)
The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". Indeed, the project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on the 25th of October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, topographic and GNSS instruments, etc., or by non-conventional systems and instruments such as UAVs, mobile mapping, etc. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.
CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state of the art during the first year of CHORUS and establishing the existing landscape of multimedia search engines, we identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements they pose for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and "enablers", which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.
Big Data Computing for Geospatial Applications
The convergence of big data and geospatial computing has brought forth challenges and opportunities to Geographic Information Science with regard to geospatial data management, processing, analysis, modeling, and visualization. This book highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, while also demonstrating opportunities for using big data in geospatial applications. Crucial to the advancements highlighted in this book are the integration of computational thinking and spatial thinking and the transformation of abstract ideas and models into concrete data structures and algorithms.
3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets
This research deals with the issues concerning the processing, management and
representation, for further dissemination, of the large amount of 3D data that
can be acquired and stored today with modern geomatic techniques of 3D metric
survey. In particular, this thesis focuses on the optimization process applied
to 3D photogrammetric data of Cultural Heritage assets.
Modern geomatic techniques enable the acquisition and storage of large amounts
of data, with high metric and radiometric accuracy and precision, including in
the very close range field, and allow the processing of very detailed 3D
textured models. Nowadays, the photogrammetric pipeline has well-established
potential and is considered one of the principal techniques for producing
detailed 3D textured models at low cost.
The potential offered by high-resolution, textured 3D models is now well known,
and such representations are a powerful tool for many multidisciplinary
purposes, at different scales and resolutions, from documentation, conservation
and restoration to visualization and education. For example, their
sub-millimetric precision makes them suitable for scientific studies of
geometry and materials (e.g. for structural and static tests, for planning
restoration activities, or for historical sources); their high fidelity to the
real object and their navigability make them well suited to web-based
visualization and dissemination applications. Thanks to improvements in new
visualization standards, they can easily be used as a visualization interface
linking different kinds of information in a highly intuitive way. Furthermore,
many museums today look for more interactive exhibitions that can heighten
visitors' emotional engagement, and many recent applications make use of 3D
content (e.g. in virtual or augmented reality applications and through virtual
museums).
What all of these applications have in common is the difficulty of managing the
large amount of data that must be represented and navigated. Indeed,
reality-based models have very heavy file sizes (often tens of GB), which makes
them difficult to handle on common and portable devices, to publish on the
internet, or to manage in real-time applications. Even though recent advances
have produced ever more sophisticated and capable hardware and internet
standards, empowering the ability to easily handle, visualize and share such
content, other research aims to define a common pipeline for the generation and
optimization of 3D models with a reduced number of polygons that are
nevertheless able to satisfy detailed radiometric and geometric requirements.
This thesis is set in this scenario and focuses on the 3D modeling process of
photogrammetric data aimed at easy sharing and visualization. In particular,
this research tested a 3D model optimization process, which aims at the
generation of Low Poly models with very small file sizes, processed starting
from the data of High Poly ones, that nevertheless offer a level of detail
comparable to the original models. To do this, several tools borrowed from the
game industry and game engines were used.
For this test, three case studies were chosen: a modern sculpture by a
contemporary Italian artist, a Roman marble statue preserved in the Civic
Archaeological Museum of Torino, and the frieze of the Arch of Augustus
preserved in the city of Susa (Piedmont, Italy). All the test cases were
surveyed by means of close-range photogrammetric acquisition, and three highly
detailed 3D models were generated using a Structure from Motion and image
matching pipeline. On the final High Poly models, different optimization and
decimation tools were tested with the aim of evaluating the quality of the
information that can be extracted from the final optimized models in comparison
with the original High Poly ones. This study showed how tools borrowed from
computer graphics offer great potential in the Cultural Heritage field as well.
This application may meet the needs of multipurpose and multiscale studies,
using different levels of optimization, and the procedure could be applied to
different kinds of objects with a variety of sizes and shapes, including
multiscale and multisensor data such as buildings, architectural complexes,
data from UAV surveys, and so on.
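The High Poly to Low Poly decimation step at the core of the workflow can be sketched in miniature. The following toy vertex-clustering simplifier is an assumption for illustration only; the thesis relies on game-industry tools, not on this code:

```python
def vertex_clustering(vertices, faces, cell):
    """Toy vertex-clustering decimation: snap each vertex to a grid of
    spacing `cell`, merge vertices sharing a grid cell, then drop faces
    that collapse or duplicate. An illustrative stand-in for the
    decimation tools discussed in the thesis."""
    cluster_of = {}        # grid cell -> new vertex index
    new_vertices = []
    remap = []             # old vertex index -> new vertex index
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((x, y, z))   # first vertex in a cell is kept
        remap.append(cluster_of[key])
    new_faces, seen = [], set()
    for a, b, c in faces:
        fa, fb, fc = remap[a], remap[b], remap[c]
        tri = tuple(sorted((fa, fb, fc)))
        if len(set(tri)) == 3 and tri not in seen:   # non-degenerate, unique
            seen.add(tri)
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces

# Two nearly coincident vertices merge, so the duplicate face is dropped.
verts = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 2, 3), (1, 2, 3)]
v2, f2 = vertex_clustering(verts, faces, cell=0.1)
print(len(v2), len(f2))  # prints "3 1"
```

Production decimators instead use error-driven criteria (e.g. quadric error metrics) plus texture re-baking, which is how a Low Poly model keeps a level of detail visually comparable to the High Poly source.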
Marine Vessel Inspection as a Novel Field for Service Robotics: A Contribution to Systems, Control Methods and Semantic Perception Algorithms.
This cumulative thesis introduces a novel field for service robotics: the inspection of marine vessels using mobile inspection robots. In this thesis, three scientific contributions are provided and experimentally verified in the field of marine inspection, but are not limited to this type of application. The inspection scenario is merely a golden thread to combine the cumulative scientific results presented in this thesis. The first contribution is an adaptive, proprioceptive control approach for hybrid leg-wheel robots, such as the robot ASGUARD described in this thesis. The robot is able to deal with rough terrain and stairs, due to the control concept introduced in this thesis. The proposed system is a suitable platform to move inside the cargo holds of bulk carriers and to deliver visual data from inside the hold. Additionally, the proposed system also has stair climbing abilities, allowing the system to move between different decks. The robot adapts its gait pattern dynamically based on proprioceptive data received from the joint motors and based on the pitch and tilt angle of the robot's body during locomotion. The second major contribution of the thesis is an independent ship inspection system, consisting of a magnetic wall climbing robot for bulkhead inspection, a particle filter based localization method, and a spatial content management system (SCMS) for spatial inspection data representation and organization. The system described in this work was evaluated in several laboratory experiments and field trials on two different marine vessels in close collaboration with ship surveyors. The third scientific contribution of the thesis is a novel approach to structural classification using semantic perception approaches. By these methods, a structured environment can be semantically annotated, based on the spatial relationships between spatial entities and spatial features. 
This method was verified in the domain of indoor perception (logistics and household environments), for soil sample classification, and for the classification of the structural parts of a marine vessel. The proposed method allows the description of the structural parts of a cargo hold in order to localize the inspection robot or any detected damage. The algorithms proposed in this thesis are based on unorganized 3D point clouds generated by a LIDAR within a ship's cargo hold. Two different semantic perception methods are proposed in this thesis: one approach is based on probabilistic constraint networks; the second is based on Fuzzy Description Logic and spatial reasoning using a spatial ontology of the environment.
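The particle-filter localization named in the abstract can be illustrated with a deliberately reduced sketch. The following 1D filter is a toy analogue only (the thesis system operates on real sensor data in higher dimensions); the wall position, noise levels, and measurements are all made-up assumptions:

```python
import math
import random

random.seed(0)

def particle_filter_1d(moves, measurements, wall=10.0, n=500):
    """Minimal 1D particle filter. State: robot position x.
    Measurement: noisy range to a wall at x = `wall`. A toy analogue
    of the localization used by the inspection robot, not its code."""
    particles = [random.uniform(0.0, wall) for _ in range(n)]
    for u, z in zip(moves, measurements):
        # 1. Predict: apply the control input with motion noise.
        particles = [p + u + random.gauss(0.0, 0.1) for p in particles]
        # 2. Weight: Gaussian likelihood of observing range z from each particle.
        weights = [math.exp(-((wall - p) - z) ** 2 / (2 * 0.2 ** 2))
                   for p in particles]
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        # 3. Resample: draw a new particle set proportional to the weights.
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n  # posterior mean as the position estimate

# Robot starts near x=2 (unknown to the filter), moves +1 per step,
# and measures its range to the wall at x=10 after each move.
est = particle_filter_1d(moves=[1.0, 1.0, 1.0], measurements=[7.0, 6.0, 5.0])
print(round(est, 1))  # estimate should land near x = 5
```

The same predict/weight/resample loop generalizes to the 2D/3D pose estimation needed on a bulkhead, with the range model replaced by the robot's actual sensor model.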
Applications of Virtual Reality
Information Technology is growing rapidly. With the birth of high-resolution graphics, high-speed computing and user-interaction devices, Virtual Reality emerged as a major new technology in the mid-1990s. Virtual Reality technology is currently used in a broad range of applications; the best known are games, movies, simulations, and therapy. From a manufacturing standpoint, there are some attractive applications, including training, education, collaborative work and learning. This book provides an up-to-date discussion of current research in Virtual Reality and its applications. It describes the current state of the art in Virtual Reality and points out many areas where there is still work to be done. We have chosen to cover certain areas in this book which we believe will have a potentially significant impact on Virtual Reality and its applications. This book provides a definitive resource for a wide variety of people, including academics, designers, developers, educators, engineers, practitioners, researchers, and graduate students.