
    From Sensor to Observation Web with Environmental Enablers in the Future Internet

    This paper outlines the grand challenges in global sustainability research and the objectives of the FP7 Future Internet PPP program within the Digital Agenda for Europe. Large user communities are generating significant amounts of valuable environmental observations at local and regional scales using the devices and services of the Future Internet. These observations represent a wealth of information that is currently hardly used, or used only in isolation, and is therefore in need of integration with other information sources. Indeed, this very integration will lead to a paradigm shift from a mere Sensor Web to an Observation Web with semantically enriched content emanating from sensors, environmental simulations and citizens. The paper also describes the research challenges in realizing the Observation Web and the associated environmental enablers for the Future Internet. Such an environmental enabler could, for instance, be an electronic sensing device, a web-service application, or even a social networking group that affords or facilitates the capability of Future Internet applications to consume, produce, and use environmental observations in cross-domain applications. The term 'envirofied' Future Internet is coined to describe this overall target, which forms a cornerstone of work in the Environmental Usage Area within the Future Internet PPP program. Relevant trends described in the paper are the use of ubiquitous sensors (anywhere), the provision and generation of information by citizens, and the convergence of real and virtual realities to convey understanding of environmental observations. The paper addresses the technical challenges in the Environmental Usage Area and the need to design a multi-style service-oriented architecture. Key topics are the mapping of requirements to capabilities, the provision of scalability and robustness, and the implementation of context-aware information retrieval. Another essential research topic is the handling of data fusion and model-based computation, and the related propagation of information uncertainty. Approaches to security, standardization and harmonization, all essential for sustainable solutions, are summarized from the perspective of the Environmental Usage Area. The paper concludes with an overview of emerging, high-impact applications in the environmental areas concerning land ecosystems (biodiversity), air quality (atmospheric conditions) and water ecosystems (marine asset management).
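    The abstract's mention of data fusion with propagation of information uncertainty can be illustrated with a minimal inverse-variance weighting sketch. This is a generic baseline, not the method proposed in the paper; the observation sources, values and variances below are hypothetical.

```python
import numpy as np

def fuse_observations(values, variances):
    """Inverse-variance weighted fusion of independent observations
    of the same quantity, with the uncertainty of the result propagated."""
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                     # more precise sources weigh more
    fused_value = np.sum(weights * values) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)        # variance of the fused estimate
    return fused_value, fused_variance

# Hypothetical example: an in-situ sensor, a model output and a citizen report
value, variance = fuse_observations([41.0, 38.5, 45.0], [4.0, 9.0, 25.0])
print(f"fused estimate: {value:.1f} +/- {variance ** 0.5:.1f}")
```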

    Patterns of mobility in a smart city

    Transportation data in smart cities is becoming increasingly available. This data allows building meaningful, intelligent solutions for city residents and city management authorities, the so-called Intelligent Transportation Systems. Our research focused on Lisbon mobility data provided by the Lisbon municipality. The main research objective was to address mobility problems, their interdependencies, and cascading effects for the city of Lisbon. We developed a data-driven approach based on historical traffic data from the city centre and its main access routes, with a strong focus on visualization methods and dashboard creation. We also applied a time-series method to forecast traffic congestion from the data provided. A CRISP-DM approach was followed, integrating different data sources using Python. The aim is to identify urban mobility patterns through data analysis and visualization, in order to understand traffic and to help the city authorities in the decision-making process, namely by being more prepared, adaptable and responsive to events.
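    As an illustration of the kind of time-series forecasting described above (the abstract names Python but not a specific model), a minimal sketch using an ARIMA baseline; the file name, column names and model order are assumptions, not details of the Lisbon dataset or the thesis.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA  # one common baseline forecaster

# Hypothetical input: timestamped congestion counts with columns
# "timestamp" and "congestion_events"
df = pd.read_csv("lisbon_congestion.csv", parse_dates=["timestamp"])
series = (df.set_index("timestamp")["congestion_events"]
            .resample("60min").sum()          # regular hourly series
            .fillna(0))

model = ARIMA(series, order=(2, 1, 2)).fit()   # small ARIMA as a simple baseline
forecast = model.forecast(steps=24)            # next 24 hours of expected congestion
print(forecast.head())
```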

    Development of a spatial data infrastructure for precision agriculture applications

    Precision agriculture (PA) is the technical answer to tackling heterogeneous conditions in a field. It works through site-specific operations on a small scale and is driven by data. The objective is an optimized agricultural field application that is adaptable to local needs. The needs differ within a task according to spatial conditions. A field, as a homogeneously planted unit, exceeds in size the scale units of different landscape-ecological properties such as soil type, slope, moisture content, solar radiation, etc. Various PA sensors sample data on the heterogeneous conditions in a field. PA software and Farm Management Information Systems (FMIS) translate the data into status information or application instructions that are optimized for the local conditions. The starting point of the research was the observation that PA processes were only being used in individual environments, without exchange between different users or with other domains. Data have been sampled for specific operations, but the PA model suffers from these closed data streams and software products. Initially, sensors, data processing and the control of implements were constructed and sold as a monolithic application. An exchange of hardware or software, or of data, was not envisaged. The design was focused on functionality in a fixed environment and conceived as a single unit. This has been identified as a disadvantage for ongoing developments and for the creation of added value, since outside influences that may be innovative or even inspiring cannot be taken into account. To make this possible, the underlying infrastructure must be flexible and optimized for the exchange of data. This thesis explores the necessary data handling, in terms of integrating knowledge from other domains, with a focus on geospatial data processing. As PA is largely dependent on geographical data, this work develops spatial data infrastructure (SDI) components and is based on the methods and tools of geoinformatics. An SDI provides concepts for the organization of geospatial components. It consists of spatial data and metadata used in geospatial workflows. The SDI at the center of these workflows is implemented through technologies, policies, arrangements, and interfaces that make the data accessible to various users. Data exchange is the major aim of the concept. As previously stated, data exchange is necessary for PA operations, and it can benefit from the defined components of an SDI. Furthermore, PA processes gain the ability to interchange with other domains. The import of additional, external data is a benefit; simultaneously, an export interface for agricultural data offers new possibilities. Coordinated communication ensures understanding for each participant. From the technological point of view, standardized interfaces are best practice. This work demonstrates the benefit of standardized data exchange for PA by using the standards of the Open Geospatial Consortium (OGC). The OGC develops and publishes a wide range of relevant standards, which are widely adopted in geospatially enabled software. They are practically proven in other domains and have been partially implemented in FMIS in recent years. Depending on their focus, they can support software solutions by incorporating additional information for humans or machines into further logic and algorithms.
    The research follows five objectives: (i) to increase the usability of PA tools in order to open the technology to a wider group of users; (ii) to include external data and services seamlessly in PA applications through standardized interfaces; (iii) to support the exchange of data and technology with other domains; (iv) to create a modern PA software architecture that allows new players and established brands to support PA processes and to develop new business segments; and (v) to use IT technologies as a driver for agriculture and to contribute to its digitalization.
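    To illustrate the standardized data exchange the thesis builds on, a minimal sketch of an OGC WFS 2.0 GetFeature request issued from Python; the service URL and layer name are hypothetical and not part of the thesis.

```python
import requests

# Hypothetical OGC Web Feature Service endpoint of a farm SDI; the typeNames value
# ("farm:field_boundaries") is an assumed layer name, not one defined by the thesis.
WFS_URL = "https://sdi.example-farm.org/geoserver/wfs"

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "farm:field_boundaries",
    "outputFormat": "application/json",   # GeoJSON output, if the server supports it
    "count": 10,
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
for feature in response.json().get("features", []):
    print(feature["id"], feature["properties"])
```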

    04441 Abstracts Collection -- Mobile Information Management

    From 24.10.04 to 29.10.04, the Dagstuhl Seminar 04441 "Mobile Information Management" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    The Many Faces of Edge Intelligence

    Edge Intelligence (EI) is an emerging computing and communication paradigm that enables Artificial Intelligence (AI) functionality at the network edge. In this article, we highlight EI as an emerging and important field of research, discuss the state of research, analyze research gaps and highlight important research challenges with the objective of serving as a catalyst for research and innovation in this emerging area. We take a multidisciplinary view to reflect on the current research in AI, edge computing, and communication technologies, and we analyze how EI reflects on existing research in these fields. We also introduce representative examples of application areas that benefit from, or even demand the use of EI.

    Vehicle trajectory prediction for safe navigation of autonomous vehicles

    Trajectory prediction of the other road users in the vicinity of an autonomous vehicle is important for safe navigation in dense traffic. Once an autonomous vehicle anticipates how the other road actors will react in the near future, path planning is a lot simpler and safer. Moreover, knowledge of the future movement of other road actors allows sudden jerks in the planned ego-vehicle path to be controlled and thus makes travel smoother. This trajectory prediction stage can be used at any level, from restricted driver assistance to full vehicle autonomy. In this thesis, two novel trajectory prediction models have been developed. In the first model, the spatio-temporal features that form the basis of behaviour prediction were captured using a Convolutional Long Short-Term Memory (Conv-LSTM) neural network architecture consisting of three modules: 1) Interaction Learning to capture the motion of and interaction with surrounding cars, 2) Temporal Learning to identify the dependency on past movements, and 3) Motion Learning to convert the extracted features from these two modules into future positions. In addition, a novel feedback scheme was introduced in which the current predicted positions of each car are leveraged to update future motion, encapsulating the effect of the surrounding cars. In the second model, a conventional Long Short-Term Memory (LSTM) cell-based encoder-decoder architecture was developed which uses not only the historical observations but also the associated map features. Moreover, unlike existing architectures, the proposed method incorporates and updates the surrounding vehicle information in both the encoder and decoder, making use of dynamically predicted new data for accurate prediction over longer time horizons. This seamlessly performs four tasks: first, it encodes a feature given the past observations; second, it estimates future maneuvers given the encoded state; third, it predicts the future motion given the estimated maneuvers and the initially encoded states; and fourth, it estimates the future trajectory given the encoded state and the predicted maneuvers and motions. Both developed models were evaluated extensively on two publicly available datasets, which include both multi-lane highways and signalled intersections, to benchmark their prediction accuracy against state-of-the-art models. Later, the conventional encoder-decoder model was also evaluated on the newly collected “Radiate” dataset, which includes two intersections, the Kingussie T-junction and the Edinburgh four-way junction, both without traffic signals. The accuracy of the predicted trajectories on the benchmark datasets is comparable with state-of-the-art methods. Moreover, evaluation on the latter dataset (“Radiate”) made it possible to better understand the effect of inter-vehicle interactions on future motion without any influence from mandatory traffic signals. Funded by the Engineering and Physical Sciences Research Council (EPSRC).
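    As an illustration of the encoder-decoder idea described above, a minimal PyTorch sketch that encodes an observed (x, y) history and decodes future positions step by step; it omits the maneuver, map and surrounding-vehicle branches of the thesis models, and all dimensions and horizons are assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryEncoderDecoder(nn.Module):
    """Minimal LSTM encoder-decoder: encodes an observed (x, y) history
    and decodes future positions one step at a time."""
    def __init__(self, hidden_size=64, pred_len=25):
        super().__init__()
        self.pred_len = pred_len
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.decoder = nn.LSTMCell(input_size=2, hidden_size=hidden_size)
        self.output = nn.Linear(hidden_size, 2)

    def forward(self, history):                       # history: (batch, obs_len, 2)
        _, (h, c) = self.encoder(history)
        h, c = h.squeeze(0), c.squeeze(0)
        position = history[:, -1, :]                   # start from the last observed point
        predictions = []
        for _ in range(self.pred_len):
            h, c = self.decoder(position, (h, c))
            position = position + self.output(h)       # predict a displacement per step
            predictions.append(position)
        return torch.stack(predictions, dim=1)         # (batch, pred_len, 2)

# Hypothetical shapes: 3 s of history at 10 Hz, 2.5 s predicted ahead
model = TrajectoryEncoderDecoder()
future = model(torch.randn(8, 30, 2))
print(future.shape)   # torch.Size([8, 25, 2])
```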

    An Integrative Analytical Framework for Internet of Things Security, Forensics and Intelligence

    The Internet of Things (IoT) has recently become an important research topic because it revolutionises our everyday life through integrating various sensors and objects to communicate directly without human intervention. IoT technology is expected to offer very promising solutions for many areas. In this thesis we focused on crime investigation and crime prevention, which may significantly contribute to human well-being and safety. Our primary goals are to reduce the time of crime investigation, minimise the time of incident response and prevent future crimes using data collected from smart devices. This PhD thesis consists of three distinct but related projects to reach the research goal. The main contributions can be summarised as:
    ‱ A multi-level access control framework, presented in Chapter 3, which could be used to secure any collected and shared data. We decided to have this as our first contribution as it is not realistic to use data that could have been altered in our prediction model or as evidence. We chose healthcare data collected from ambient sensors and uploaded to cloud storage as an example for our framework, as this data is collected from multiple sources and is used by different parties. The access control system regulates access to data by defining policy attributes over healthcare professional groups and data class classifications. The proposed access control system contains a policy model, an architecture model and a methodology to classify data classes and healthcare professional groups.
    ‱ An investigative framework, discussed in Chapter 4, which contains a multi-phased process flow that coordinates different roles and tasks in IoT-related crime investigation. The framework identifies digital information sources and captures all potential evidence from smart devices in a way that guarantees the potential evidence is not altered, so that it can be admissible in a court of law.
    ‱ A deep learning multi-view model, demonstrated in Chapter 5, that explores the relationship between tweets, weather (a type of sensory data) and crime rate for effective crime prediction. This contribution is motivated by the need to deploy police forces correctly so that they are present at the right times.
    Both the proposed investigative framework and the predictive model were evaluated and tested, and the results of these evaluations are presented in the thesis. The proposed framework and model contribute significantly to the fields of crime investigation and crime prediction. We believe their application would provide evidence of higher admissibility, more efficient investigations, and better ways to plan law enforcement deployment based on crime rate prediction using collected sensory data.
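    To illustrate the idea of policy attributes defined over professional groups and data classes, a minimal Python sketch of an attribute-based permission check; the groups, data classes and rules are hypothetical and do not reproduce the thesis's policy model.

```python
from dataclasses import dataclass

# Hypothetical policy: which healthcare professional groups may read which data classes.
# The groups, classes and rules are illustrative only.
POLICY = {
    "physician":  {"vital_signs", "medication", "diagnosis"},
    "nurse":      {"vital_signs", "medication"},
    "researcher": {"vital_signs"},          # e.g. de-identified readings only
}

@dataclass
class AccessRequest:
    professional_group: str
    data_class: str
    action: str = "read"

def is_permitted(request: AccessRequest) -> bool:
    """Grant access only if the requester's group is allowed the requested data class."""
    allowed = POLICY.get(request.professional_group, set())
    return request.action == "read" and request.data_class in allowed

print(is_permitted(AccessRequest("nurse", "medication")))      # True
print(is_permitted(AccessRequest("researcher", "diagnosis")))  # False
```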

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes are included that provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.
    • 

    corecore