
    Distributed, Low-Cost, Non-Expert Fine Dust Sensing with Smartphones

    This dissertation addresses the question of how fine dust can be measured at high temporal and spatial resolution with low-cost sensors. To this end, a new sensor system based on low-cost off-the-shelf sensors and smartphones is presented, robust signal-processing algorithms are developed for it, and findings on interaction design for measurements taken by laypersons are reported. On a global scale, atmospheric aerosol particles pose a serious threat to human health, manifesting in respiratory and cardiovascular diseases and shortening life expectancy. Until now, air quality has been assessed solely from the data of relatively few fixed monitoring stations and extrapolated to high spatial resolution with models, so its representativeness for the population-wide exposure remains unresolved. Such spatial maps cannot be derived from the current static monitoring networks. In the health-related assessment of pollutants, the trend is therefore strongly towards spatially differentiated measurements. A promising approach for achieving high spatial and temporal coverage is participatory sensing, i.e. distributed measurement by end users with their personal devices. Air quality measurements in particular raise a number of challenges, from new sensor hardware that is low-cost and portable, through robust algorithms for signal evaluation and calibration, to applications that help laypersons carry out measurements correctly and protect their privacy. This work focuses on the application scenario of participatory environmental sensing, in which smartphone-based sensors measure the environment and laypersons typically perform the measurements in a relatively uncontrolled manner. The main contributions are:
    1. Systems for sensing fine dust with smartphones (low-cost sensors and new hardware): Building on earlier research into fine dust measurement with low-cost off-the-shelf sensors, a sensor concept was developed in which fine dust is measured with a passive attachment on a smartphone camera. Sensor performance was assessed partly through laboratory measurements with artificially generated dust and partly through field evaluations co-located with official state monitoring stations.
    2. Algorithms for signal processing and evaluation: For the new sensor designs, combinations of well-known OpenCV image-processing algorithms (background subtraction, contour detection, etc.) are used for image analysis. Unlike the evaluation of aggregate light-scattering signals, the resulting algorithm counts particles directly from their individual light traces (a minimal sketch follows this abstract). A second, novel algorithm exploits the fact that such processes exhibit signal-dependent noise whose ratio to the signal mean is known; this makes it possible to analyse signals affected by unknown systematic errors on the basis of their noise and to reconstruct the "true" signal.
    3. Algorithms for distributed, privacy-preserving calibration: One challenge of participatory environmental sensing is the recurring need for sensor calibration. This stems on the one hand from the instability of low-cost air quality sensors in particular, and on the other from the fact that end users usually lack the means to calibrate. Existing approaches to so-called cross-calibration of sensors co-located with a reference station or with other sensors were applied to data from low-cost fine dust sensors and extended with mechanisms that let sensors calibrate against one another without disclosing private information (identity, location).
    4. Human-computer interaction design guidelines for participatory sensing: Based on several small exploratory user studies, a taxonomy of the errors laypersons make when measuring environmental information with smartphones was derived empirically. From this, possible countermeasures were collected and classified. In a large summative study with a high number of participants, the effect of several of these measures was evaluated by comparing four variants of an app for participatory measurement of ambient noise. The resulting insights form the basis of guidelines for designing efficient participatory sensing user interfaces on mobile devices.
    5. Design patterns for participatory sensing games on mobile devices (gamification): A further approach investigated is the gamification of the measurement process in order to minimise user errors through suitable game mechanics. Here the measurement process is embedded, for example, in a smartphone game (a so-called minigame) that performs the measurement in the background whenever the context is suitable. To develop this concept, dubbed "Sensified Gaming", core tasks in participatory sensing were identified and matched against game mechanics (game design patterns) collected from the literature
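    The particle-counting idea in contribution 2 can be pictured with a short, illustrative sketch: an OpenCV pipeline that combines background subtraction and contour detection to count individual light traces per camera frame. This is not the dissertation's implementation; the subtractor settings, binarisation threshold and minimum blob area are assumptions chosen only for demonstration.

```python
# Minimal sketch: counting particle light traces in camera frames via
# OpenCV background subtraction and contour detection. Thresholds and the
# minimum-area filter are illustrative assumptions, not values from the thesis.
import cv2

def count_particle_traces(frames, min_area=4):
    """Count bright light traces per frame in a sequence of grayscale images."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16,
                                                    detectShadows=False)
    counts = []
    for frame in frames:
        fg_mask = subtractor.apply(frame)            # suppress static background / stray light
        _, binary = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # keep only blobs large enough to plausibly be a particle trace, not sensor noise
        traces = [c for c in contours if cv2.contourArea(c) >= min_area]
        counts.append(len(traces))
    return counts
```

    In a real sensor, the per-frame trace counts would still have to be converted to a particle number concentration using the illuminated sensing volume and flow conditions, and calibrated against reference instrumentation as described above.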

    SmartAQnet 2020: A New Open Urban Air Quality Dataset from Heterogeneous PM Sensors

    The increasing attention paid to urban air quality modeling places higher requirements on urban air quality datasets. This article introduces a new urban air quality dataset—the SmartAQnet2020 dataset—which has a large span and high resolution in both time and space dimensions. The dataset contains 248,572,003 observations recorded by over 180 individual measurement devices, including ceilometers, Radio Acoustic Sounding System (RASS), mid- and low-cost stationary measuring equipment equipped with meteorological sensors and particle counters, and low-weight portable measuring equipment mounted on different platforms such as trolley, bike, and UAV
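    As a hedged illustration of how such a heterogeneous observation table might be consumed, the sketch below resamples per-device PM2.5 readings to hourly means with pandas. The file name and column names (device_id, timestamp, pm2_5) are assumptions for demonstration, not the published schema of the SmartAQnet 2020 dataset.

```python
# Illustrative sketch of working with a heterogeneous PM observation table.
# File and column names are assumptions, not the actual SmartAQnet 2020 schema.
import pandas as pd

obs = pd.read_csv("smartaqnet_2020_observations.csv", parse_dates=["timestamp"])

# Hourly PM2.5 means per device, so stationary and mobile platforms become comparable.
hourly = (
    obs.set_index("timestamp")
       .groupby("device_id")["pm2_5"]
       .resample("1h")
       .mean()
       .reset_index()
)
print(hourly.head())
```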

    On the use of autonomous unmanned vehicles in response to hazardous atmospheric release incidents

    Recent events have induced a surge of interest in the methods of response to releases of hazardous materials or gases into the atmosphere. In the last decade there has been particular interest in mapping and quantifying emissions for regulatory purposes, emergency response, and environmental monitoring. Examples include: responding to events such as gas leaks, nuclear accidents or chemical, biological or radiological (CBR) accidents or attacks, and even exploring sources of methane emissions on the planet Mars. This thesis presents a review of the potential responses to hazardous releases, which includes source localisation, boundary tracking, mapping and source term estimation. [Continues.]

    Signals and Images in Sea Technologies

    Life below water is the 14th Sustainable Development Goal (SDG) envisaged by the United Nations and is aimed at conserving and sustainably using the oceans, seas, and marine resources for sustainable development. It is not difficult to argue that signal and image technologies may play an essential role in achieving the targets linked to SDG 14. Besides increasing the general knowledge of ocean health by means of data analysis, methodologies based on signal and image processing can be helpful in environmental monitoring, in protecting and restoring ecosystems, in finding new sensor technologies for green routing and eco-friendly ships, in providing tools for implementing best practices for sustainable fishing, as well as in defining frameworks and intelligent systems for enforcing sea law and making the sea a safer and more secure place. Imaging is also a key element in exploring the underwater world for a variety of purposes, ranging from the predictive maintenance of sub-sea pipelines and other infrastructure to the discovery, documentation, and protection of sunken cultural heritage. The scope of this Special Issue encompasses techniques and ICT approaches, in particular the study and application of signal- and image-based methods and the advantages of applying them in the areas mentioned above

    Situation inference and context recognition for intelligent mobile sensing applications

    The use of smart devices is an integral element of our daily lives. With the richness of data streaming from sensors embedded in these smart devices, the applications of ubiquitous computing are limitless for future intelligent systems. Situation inference is a non-trivial issue in ubiquitous computing research due to the challenges of mobile sensing in unrestricted environments. There are various advantages to having robust and intelligent situation inference from data streamed by mobile sensors. For instance, we would be able to gain a deeper understanding of human behaviours in certain situations via a mobile sensing paradigm. It can then be used to recommend resources or actions for enhanced cognitive augmentation, such as improved productivity and better human decision making. Sensor data can be streamed continuously from heterogeneous sources with different frequencies in a pervasive sensing environment (e.g., a smart home). It is difficult and time-consuming to build a model capable of recognising multiple activities, which can be performed simultaneously and at different granularities. We investigate the separability of multiple activities in time-series data and develop OPTWIN, a technique to determine the optimal time window size to be used in a segmentation process. As a result, this novel technique reduces the need for sensitivity analysis, an inherently time-consuming task. To achieve an effective outcome, OPTWIN leverages multi-objective optimisation by minimising the impurity (the number of windows with overlapping human activity labels on one label space over the time-series data) while maximising class separability. The next issue is to effectively model and recognise multiple activities based on the user's contexts. Hence, an intelligent system should address the problem of multi-activity and context recognition prior to the situation inference process in mobile sensing applications. The performance of simultaneous recognition of human activities and contexts can easily be affected by the choice of modelling approach used to build an intelligent model. We investigate the associations between these activities and contexts at multiple levels of the mobile sensing perspective to reveal the dependency property of the multi-context recognition problem. We design a Mobile Context Recognition System, which incorporates a Context-based Activity Recognition (CBAR) modelling approach to produce effective outcomes from both multi-stage and multi-target inference processes and recognise human activities and their contexts simultaneously. In our empirical evaluation on real-world datasets, the CBAR modelling approach significantly improved the overall accuracy of simultaneous inference of transportation mode and human activity of mobile users. The accuracy of activity and context recognition is also progressively influenced by how reliable user annotations are. Essentially, reliable user annotations are required for activity and context recognition, and they are usually acquired during data capture in the wild. We investigate how to reduce user burden effectively during mobile sensor data collection through in-the-wild experience sampling of these annotations. To this end, we design CoAct-nnotate, a technique that aims to improve the sampling of human activities and contexts by providing accurate annotation predictions and facilitating interactive user feedback acquisition for ubiquitous sensing. CoAct-nnotate incorporates a novel multi-view multi-instance learning mechanism to perform more accurate annotation prediction. It also includes a progressive learning process (i.e., model retraining based on co-training and active learning) to improve its predictive performance over time. Moving beyond context recognition of mobile users, human activities can be related to essential tasks that users perform in daily life. However, the boundaries between types of tasks are inherently difficult to establish, as they can be defined differently from individuals' perspectives. Consequently, we investigate the implications of contextual signals for user tasks in mobile sensing applications. To define task boundaries and hence recognise tasks, we incorporate this situation inference process (i.e., task recognition) into the proposed Intelligent Task Recognition (ITR) framework, which learns users' Cyber-Physical-Social activities from their mobile sensing data. By accurately recognising the tasks a user is engaged in at a given time via mobile sensing, an intelligent system can then offer proactive support to help the user progress and complete those tasks. Finally, for robust and effective learning from mobile sensing data originating from heterogeneous sources (e.g., Internet-of-Things devices in a mobile crowdsensing scenario), we investigate the utility of sensor data in provisioning their storage and design QDaS, an application-agnostic framework for quality-driven data summarisation. QDaS performs density-based clustering on multivariate time-series data from a selected source (i.e., data provider), with source selection determined by a measure of data quality. This framework allows intelligent systems to retain comparable predictive results by learning effectively on compact representations of mobile sensing data, while achieving a higher space-saving ratio. This thesis contains novel contributions in terms of the techniques that can be employed for mobile situation inference and context recognition, especially in the domain of ubiquitous computing and intelligent assistive technologies. This research implements and extends the capabilities of machine learning techniques to solve real-world problems in multi-context recognition, mobile data summarisation and situation inference from mobile sensing. We firmly believe that the contributions of this research will help future studies move forward in building more intelligent systems and applications
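    The window-size selection idea behind OPTWIN can be illustrated with a simplified sketch: score each candidate window size by a label-impurity term and a crude class-separability proxy, then pick the best trade-off. The scoring functions below are plain stand-ins for illustration, not the thesis's actual multi-objective formulation.

```python
# Simplified sketch of choosing a segmentation window size by trading off
# label impurity against class separability, in the spirit of the OPTWIN idea
# described above. Scores are illustrative stand-ins, not the thesis's objective.
import numpy as np

def impurity(labels, win):
    """Fraction of non-overlapping windows that mix more than one activity label."""
    windows = [labels[i:i + win] for i in range(0, len(labels) - win + 1, win)]
    mixed = sum(1 for w in windows if len(set(w)) > 1)
    return mixed / max(len(windows), 1)

def separability(signal, labels, win):
    """Crude separability proxy: spread of per-window means across pure-label
    windows, normalised by the average within-window standard deviation."""
    means, stds = {}, []
    for i in range(0, len(labels) - win + 1, win):
        w_labels = set(labels[i:i + win])
        if len(w_labels) == 1:                      # only pure windows contribute
            seg = signal[i:i + win]
            means.setdefault(w_labels.pop(), []).append(seg.mean())
            stds.append(seg.std())
    if len(means) < 2 or not stds:
        return 0.0
    class_means = np.array([np.mean(v) for v in means.values()])
    return float(class_means.std() / (np.mean(stds) + 1e-9))

def pick_window(signal, labels, candidates=(16, 32, 64, 128), weight=1.0):
    """Return the candidate window size with the best separability/impurity trade-off."""
    scores = {w: separability(signal, labels, w) - weight * impurity(labels, w)
              for w in candidates}
    return max(scores, key=scores.get)
```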

    A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors

    Understanding atmospheric transport and dispersal events has an important role in a range of scenarios. Of particular importance is aiding emergency response after an intentional or accidental chemical, biological or radiological (CBR) release. In the event of a CBR release, it is desirable to know the current and future spatial extent of the contaminant as well as its location in order to aid decision makers in emergency response. Many dispersion phenomena may be opaque or clear, so monitoring them using visual methods can be difficult or impossible. In these scenarios, suitable concentration sensors are required to detect the substance; they can form a static network on the ground or be placed upon mobile platforms. This paper presents a review of techniques used to gain information about atmospheric dispersion events using static or mobile sensors. The review concludes with a discussion of the current limitations of the state of the art and recommendations for future research
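    To make the notion of source term estimation concrete, the sketch below pairs a textbook Gaussian plume forward model with a least-squares estimate of the emission rate from a handful of static sensor readings; because the plume model is linear in the source strength Q, the estimate reduces to a dot product. The dispersion coefficients, wind speed, release height and sensor layout are illustrative assumptions, not values from the reviewed literature.

```python
# Minimal sketch: estimating a source emission rate Q from static sensor readings
# with a Gaussian plume forward model. All parameter values are illustrative.
import numpy as np

def gaussian_plume_unit(x, y, z, u=3.0, H=10.0):
    """Concentration at (x, y, z) for a unit emission rate (Q = 1).
    x: downwind distance, y: crosswind offset, z: height; u: wind speed; H: release height."""
    sigma_y = 0.08 * x                               # crude neutral-stability dispersion fits
    sigma_z = 0.06 * x
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground-reflection term
    return lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)

def estimate_source_strength(sensor_xyz, measured_conc):
    """Least-squares Q from unit-emission predictions at each sensor location."""
    a = np.array([gaussian_plume_unit(*p) for p in sensor_xyz])
    c = np.asarray(measured_conc)
    return float(a @ c / (a @ a))

# Example: three downwind sensors at 2 m height, synthetic noiseless readings
sensors = [(100.0, 0.0, 2.0), (200.0, 20.0, 2.0), (300.0, -30.0, 2.0)]
true_Q = 5.0
readings = [true_Q * gaussian_plume_unit(*p) for p in sensors]
print(estimate_source_strength(sensors, readings))   # recovers about 5.0
```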

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interface, big data, manufacturing, management, etc. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to the improvement in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior model, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens

    Air Force Institute of Technology Research Report 2012

    This report summarizes the research activities of the Air Force Institute of Technology’s Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses/dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are: faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems and Engineering Management, Operational Sciences, Mathematics, Statistics and Engineering Physics