199 research outputs found
Distributed, Low-Cost, Non-Expert Fine Dust Sensing with Smartphones
This dissertation addresses the question of how particulate matter can be measured at high temporal and spatial resolution with low-cost sensing. To this end, it presents a new sensor system based on low-cost off-the-shelf sensors and smartphones, develops corresponding robust signal-processing algorithms, and reports findings on interaction design for measurements performed by laypeople.
Atmospheric aerosol particles are a serious global problem for human health, manifesting in respiratory and cardiovascular diseases and shortening life expectancy. To date, air quality has been assessed exclusively from the data of relatively few fixed monitoring stations, brought to high spatial resolution by means of models, so that its representativeness for the population's area-wide exposure remains unclear. Such spatial mappings cannot be determined with today's static monitoring networks. In the health-related assessment of pollutants, the trend is therefore strongly towards spatially differentiated measurements.
A promising approach to achieving high spatial and temporal coverage is participatory sensing, i.e. distributed measurement by end users with the help of their personal devices. Air-quality measurement in particular raises a number of challenges, ranging from new sensor hardware that is low-cost and portable, through robust algorithms for signal evaluation and calibration, to applications that help laypeople perform measurements correctly while protecting their privacy.
This work focuses on the application scenario of participatory environmental sensing, in which smartphone-based sensors are used to measure the environment and the measurements are typically carried out by laypeople in a relatively uncontrolled manner. The main contributions are:
1. Systems for sensing particulate matter with smartphones (low-cost sensing and new hardware):
Building on earlier research into particulate matter measurement with low-cost off-the-shelf sensors, a sensor concept was developed in which the measurement is performed with a passive attachment on a smartphone camera. Sensor performance was assessed partly through laboratory measurements with artificially generated dust and partly through field evaluations co-located with official state monitoring stations.
2. Algorithms for signal processing and evaluation:
For the new sensor designs, combinations of well-known OpenCV image-processing algorithms (background subtraction, contour detection, etc.) are used for image analysis. In contrast to evaluating aggregate light-scattering signals, the resulting algorithm allows particles to be counted directly from their individual light traces. A second, novel algorithm exploits the fact that such processes exhibit signal-dependent noise whose ratio to the signal mean is known. This makes it possible to analyse signals affected by unknown systematic errors on the basis of their noise and to reconstruct the "true" signal.
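The particle-counting pipeline described above can be sketched without the OpenCV dependency. This is a minimal illustration of the idea, not the dissertation's actual implementation: the frame format, the intensity threshold, and the use of 4-connectivity are all assumptions chosen for the example.

```python
def subtract_background(frames, thresh=30):
    """Per-pixel background = mean over frames; returns binary foreground masks.
    thresh is an assumed intensity margin for a particle's light trace."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    bg = [[sum(f[y][x] for f in frames) / n for x in range(w)] for y in range(h)]
    return [
        [[1 if f[y][x] - bg[y][x] > thresh else 0 for x in range(w)] for y in range(h)]
        for f in frames
    ]

def count_particles(mask):
    """Count 4-connected bright regions; one region corresponds to one light trace."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]  # flood-fill the whole region so it is counted once
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return count

# demo on synthetic 5x5 frames: two isolated bright pixels yield two "particles"
frames = [[[10] * 5 for _ in range(5)] for _ in range(2)]
frames[1][1][1] = 200
frames[1][3][3] = 200
masks = subtract_background(frames)
print(count_particles(masks[1]))  # → 2
```

In the real system, OpenCV's background subtractors and contour detection would replace the naive mean background and flood fill, but the structure of the pipeline stays the same.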
3. Algorithms for distributed calibration with simultaneous privacy protection:
One challenge of participatory environmental sensing is the recurring need for sensor calibration. This stems on the one hand from the instability of (especially low-cost) air-quality sensors, and on the other hand from the fact that end users usually lack the means to calibrate them. Existing approaches to so-called cross-calibration of sensors co-located with a reference station or with other sensors were applied to data from low-cost particulate matter sensors and extended with mechanisms that enable sensors to calibrate one another without disclosing private information (identity, location).
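As a concrete illustration of cross-calibration during co-location, the following sketch fits a simple gain/offset model from paired samples of a low-cost sensor and a reference instrument. The linear model and all names here are assumptions for illustration only; the privacy-preserving exchange described in the thesis is deliberately omitted.

```python
def fit_cross_calibration(raw, reference):
    """Least-squares gain and offset mapping raw readings onto the reference,
    estimated from paired samples taken while the sensors were co-located."""
    n = len(raw)
    mean_raw = sum(raw) / n
    mean_ref = sum(reference) / n
    var = sum((x - mean_raw) ** 2 for x in raw)
    cov = sum((x - mean_raw) * (y - mean_ref) for x, y in zip(raw, reference))
    gain = cov / var
    return gain, mean_ref - gain * mean_raw

def apply_calibration(value, gain, offset):
    """Correct a single raw reading with the fitted parameters."""
    return gain * value + offset

# co-located samples: this low-cost sensor reads roughly half the reference
raw = [10.0, 20.0, 30.0, 40.0]
ref = [21.0, 41.0, 61.0, 81.0]  # reference = 2 * raw + 1
gain, offset = fit_cross_calibration(raw, ref)
print(gain, offset)  # → 2.0 1.0
```

In a network of sensors, the same fit can be run pairwise between co-located devices, which is the basis of the cross-calibration schemes the thesis extends with privacy mechanisms.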
4. Human-computer interaction design guidelines for participatory sensing:
Based on several small exploratory user studies, a taxonomy was derived empirically of the errors laypeople make when measuring environmental information with smartphones. Building on this, possible countermeasures were collected and classified. In a large summative study with a high number of participants, the effect of several of these measures was evaluated by comparing four different variants of an app for the participatory measurement of ambient noise. The resulting findings form the basis of guidelines for designing efficient user interfaces for participatory sensing on mobile devices.
5. Design patterns for participatory sensing games on mobile devices (gamification):
A further approach investigates gamifying the measurement process in order to minimise user errors through suitable game mechanics. The measurement process is embedded, for example, in a smartphone game (a so-called minigame) that performs the measurement in the background when the context is suitable. To develop this concept, dubbed "Sensified Gaming", core tasks in participatory sensing were identified and matched against game design patterns collected from the literature.
Rough Consensus and Running Code: Integrating Engineering Principles into Internet Policy Debates
Symposium: Rough Consensus and Running Code: Integrating Engineering Principles into Internet Policy Debates, held at the University of Pennsylvania's Center for Technology Innovation and Competition on May 6-7, 2010
Protecting Online Privacy in the Digital Age: Carpenter v. United States and the Fourth Amendment's Third-Party Doctrine
The intent of this thesis is to examine the future of the third-party doctrine with the proliferation of technology and the online data we are surrounded with daily, specifically after the United States Supreme Court's decision in Carpenter v. United States. In order to better understand the Supreme Court's reasoning in that case, this thesis will review the history of the third-party doctrine and its roots in United States v. Miller and Smith v. Maryland. A review of Fourth Amendment history and jurisprudence is also crucial to this thesis, as it is imperative that individuals do not forfeit their Constitutional guarantees for the benefit of living in a technologically advanced society. This requires an understanding of the modern-day functional equivalents of papers and effects. Furthermore, this thesis will ultimately answer the following question: Why is it legally significant that we protect at least some data that comes from technologies that our forefathers could have never imagined under the Fourth Amendment?
Looking to the future, this thesis contemplates how to move forward in this technological era. It scrutinizes the continued relevance of the third-party doctrine given the rise of technology and the enormous amount of information held about us by third parties. In the past, the third-party doctrine may have been good law, but that time has passed. It is time for the doctrine to be abolished so the Fourth Amendment can join the 21st century.
Automating Data Rights
This report documents the program and the outcomes of Dagstuhl Seminar 18181 “Towards Accountable Systems”, which took place from April 29th to May 4th, 2018, at Schloss Dagstuhl – Leibniz Center for Informatics. Researchers and practitioners from academia and industry were brought together, covering broad fields from computer and information science to public policy and law. Many risks and opportunities were discussed that relate to the alignment of systems technologies with developing legal and regulatory requirements and evolving user expectations. The report summarises the outcomes of the seminar by highlighting key future research directions and challenges that lie on the path to developing systems that better align with accountability concerns.
Hashtag Holocaust: Negotiating Memory in the Age of Social Media
This study examines the representation of Holocaust memory through photographs on the social media platforms of Flickr and Instagram. It looks at how visitors – armed with digital cameras and smartphones – depicted their experiences at the former concentration camps of Auschwitz-Birkenau, Dachau, Sachsenhausen, and Neuengamme. The study’s arguments are twofold: firstly, social media posts about visits to former concentration camps are a form of Holocaust memory, and secondly, social media allows people from all backgrounds the opportunity to share their memories online. Holocaust memory on social media introduces a new, digital kind of memory called “filtered memory.”
This study demonstrates that social media is a form of memory. The photo-based platforms of Flickr and Instagram help visualize it: the photographs on these sites were literally and figuratively “filtered.” Users had the ability to select a black-and-white filter, or ones that lightened or darkened the photographs. Digital cameras and smartphones allowed users to take as many photos as they liked and upload the photo(s) they wished. Figuratively speaking, people chose to present certain parts of their visits on social media platforms. They filtered their experiences and chose the part of their story they wanted to tell.
Building from the varied fields of memory studies, history of the Holocaust, visual culture, dark tourism, and public history, this study demonstrates that social media is a digital archive that historians must consider when writing about historical memory in the twenty-first century.
Content-Aware: investigating tools, character & user behavior
Content—Aware serves as a platform for investigating structure, corruption, and visual interference in the context of present-day technologies. I use fragmentation, movement, repetition, and abstraction to interrogate current methods and tools for engaging with the built environment, here broadly conceived as the material, spatial, and cultural products of human labor.
Physical and graphic spaces become grounds for testing visual hypotheses. By testing images and usurping image-making technologies, I challenge the fidelity of vision and representation. Rooted in active curiosity and a willingness to fully engage, I collaborate with digital tools, play with their edges, and build perceptual portholes. Through documentation and curation of visual experience, I expose and challenge a capitalist image infrastructure.
I create, collect, and process images using smartphone cameras, screen recordings, and applications such as Shrub and Photoshop. These devices and programs, which have the capacity to produce visual smoothness and polish, also inherently engender repetition and fragmentation. The same set of tools used to perfect images is easily reoriented towards visual destabilization.
Projects presented here are not meant to serve as literal translations, but rather as symbols or variables in experimental graphic communication strategies. Employing these strategies, I reveal the frames and tools through which we view the world. By exploring and exploiting the limitations of manmade technologies, I reveal the breadth of our human relationships with them, including those of creators, directors, users, and recipients.
Investigating perceptions of reliability, efficiency and feasibility of data storage technology: A case study of cloud storage adoption at UCT Faculty of Science
Within an increasing number of organisations cloud storage is becoming more common as large amounts of data from people and projects are being produced, exchanged and stored (Chang & Wills, 2016: 56). In fact, “technology has evolved and has allowed increasingly large and efficient data storage, which in turn has allowed increasingly sophisticated ways to use it” (Staff, 2016: n.p.). Thus, the aim of this study is to investigate perceptions of the reliability, efficiency and feasibility of data storage technology. The investigation addresses claims about and perceptions of data storage technology within the Faculty of Science at UCT. This study intends to determine whether cloud storage is the future of storing, managing and preserving digital data. The study used a qualitative research method grounded in Management Fashion Theory. Data was collected from three case studies in the Faculty of Science, and also from a desktop internet search on the marketing of cloud storage. Data collection for the case studies was facilitated through semi-structured interviews with three researchers and academics who are working on cloud storage projects. The main themes that guided the dialogue during data collection originated from the reviewed literature. The study concludes that cloud storage is the way forward for storing, sharing and managing research data. Academic researchers find storing data in the cloud beneficial; however, it comes with challenges such as costs, security, access, privacy, control and ethics.
iURBAN
iURBAN: Intelligent Urban Energy Tool introduces an urban energy tool integrating different ICT energy management systems (both hardware and software) in two European cities, providing useful data to a novel decision support system that supplies the parameters needed to generate and subsequently operate the associated business models. These business models contribute at a global level to efficiently managing and distributing the energy produced and consumed at a local level (city or neighbourhood), incorporating the behavioural aspects of users, and of prosumers in general, into the software platform.
iURBAN integrates a smart Decision Support System (smartDSS) that collects real-time or near-real-time data, aggregates and analyses it, and suggests actions on energy consumption and production from different buildings, renewable energy production resources, combined heat and power plants, electric vehicle (EV) charging stations, storage systems, sensors and actuators. The consumption and production data is collected via heterogeneous data communication protocols and networks.
Through a Local Decision Support System, the iURBAN smartDSS allows citizens to analyse the consumption and production they generate, receive information about CO2 savings, get advice on demand response, and participate actively in the energy market. Through a Centralised Decision Support System, it allows utilities, ESCOs, municipalities or other authorised third parties to:
- get a continuous snapshot of city energy consumption and production
- manage energy consumption and production
- forecast energy consumption
- plan new energy "producers" for the future needs of the city
- visualise, analyse and make decisions about all the end points consuming or producing energy at city level, enabling them to forecast and plan the renewable power generation available in the city.