131 research outputs found

    Data cycle in atmospheric physics: from a measured millivolt to understanding the atmosphere (Tietokierto ilmakehäfysiikassa: mitatusta millivoltista ilmakehän ymmärtämiseen)

    In this thesis the concept of a data cycle is introduced. The concept itself is general and only acquires real content when the field of application is defined. Applied to atmospheric physics, the data cycle includes measurements, data acquisition, processing, analysis and interpretation. The atmosphere is a complex system in which everything is in a constantly moving equilibrium. The scientific community agrees unanimously that it is human activity that is accelerating climate change. Nevertheless, a complete understanding of the process is still lacking. The biggest uncertainty in our understanding is connected to the role of nano- to micro-scale atmospheric aerosol particles, which are emitted into the atmosphere directly or formed from precursor gases. The latter process has only been discovered recently in the long history of science and links nature's own processes to human activities. The incomplete understanding of atmospheric aerosol formation and the intricacy of the process have motivated scientists to develop novel ways to acquire data, new methods to explore already acquired data, and unprecedented ways to extract information from the examined complex systems - in other words, to complete a full data cycle. Until recently it has been impossible to directly measure the chemical composition of the precursor gases and clusters that participate in atmospheric particle formation. However, with the arrival of the so-called atmospheric pressure interface time-of-flight mass spectrometer we are now able to detect atmospheric ions that take part in particle formation. The amount of data generated by on-line analysis of atmospheric particle formation with this instrument is vast and requires efficient processing. For this purpose, dedicated software was developed and tested in this thesis. When processed data from multiple instruments are combined, the information content increases, which requires special tools to extract useful information. Source apportionment and data mining techniques were explored and utilized to investigate the origin of atmospheric aerosol in urban environments (two case studies: Krakow and Helsinki) and to uncover indirect variables influencing the atmospheric formation of new particles.

    This thesis introduces a concept, the data cycle in the atmospheric sciences. The data cycle is in itself a general notion and is not tied to any particular discipline. It accounts for every step from the raw measured value to the application, understanding and interpretation of the data. In atmospheric physics the data cycle comprises the stages from detecting the signal, to data acquisition, pre-processing and processing, and through these to interpretation. The atmosphere is a complex whole in which everything is in a constantly changing mutual equilibrium. The scientific community unanimously holds that the accelerating climate change is a consequence of human activity, yet the exact process is not known. The largest uncertainty in our understanding is the effect of aerosol particles on climate change. Aerosol particles end up in the atmosphere either directly from emission sources or are formed through nucleation, i.e. gas-to-particle conversion. The latter phenomenon has been discovered only recently and a detailed understanding of it is still missing. The complexity of the phenomenon has fascinated and motivated researchers to develop new instrumentation, measurement methods, data-analysis methods and new ways of distilling information from already collected data - in other words, to complete and improve the data cycle. Previously it has been impossible to measure directly the chemical composition of the gases participating in gas-to-particle conversion. The instrument used in this work (the atmospheric pressure interface time-of-flight mass spectrometer, APiTOF) can detect these gases directly, without pre-treatment. Because the instrument is new and the amount of data it produces is large, an efficient pre-processing method and tool for the raw data was developed in this work. When processed data from several instruments are combined, the information content grows but extracting it becomes harder. In this work, methods for the source apportionment of air masses were developed and applied in order to identify the worst polluters and emission sources in an urban environment. Data mining was utilized to find factors influencing gas-to-particle conversion
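    The source apportionment and data mining mentioned above can be illustrated with a minimal sketch. Assuming a matrix of time-resolved concentrations combined from several instruments, a non-negative matrix factorization (one common receptor-modelling technique; the thesis may well use a different formulation, such as positive matrix factorization) separates it into source profiles and time-varying contributions. Every name and number below is an illustrative placeholder, not material from the thesis.

import numpy as np
from sklearn.decomposition import NMF

# X: hypothetical data matrix, rows = time stamps, columns = measured species
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 12)))  # placeholder for real concentration data

# Decompose X ~ G @ F: G holds time-varying source contributions,
# F holds the chemical profile of each source factor.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)  # contributions (time x factors)
F = model.components_       # profiles (factors x species)

# Relative contribution of each factor to the total reconstructed signal
share = G.sum(axis=0) * F.sum(axis=1)
print(share / share.sum())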

    Intelligent Systems

    This book is dedicated to intelligent systems of broad-spectrum application, such as personal and social biosafety or the use of intelligent sensory micro-nanosystems such as "e-nose", "e-tongue" and "e-eye". In addition, effective information acquisition, knowledge management and improved knowledge transfer in any medium, as well as the modeling of information content using meta- and hyper-heuristics and semantic reasoning, all benefit from the systems covered in this book. Intelligent systems can also be applied in education and in generating the intelligent distributed eLearning architecture, as well as in a large number of technical fields, such as industrial design, manufacturing and utilization, e.g., in precision agriculture, cartography, electric power distribution systems, intelligent building management systems, drilling operations etc. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty and the joint synthesis of goals and means of intelligent behavior in biosystems, as well as diagnostic and human support in the healthcare environment, have also been made easier

    GVSU Press Releases, 2009

    A compilation of press releases for the year 2009 submitted by University Communications (formerly News & Information Services) to news agencies concerning the people, places, and events related to Grand Valley State University

    View-based RDF search (Näkymäpohjainen RDF-haku)


    Semantic protection and personalization of video content. PIAF: an MPEG-compatible multimedia adaptation framework for preserving user-perceived quality (Semantischer Schutz und Personalisierung von Videoinhalten. PIAF: MPEG-kompatibles Multimedia-Adaptierungs-Framework zur Bewahrung der vom Nutzer wahrgenommenen Qualität)

    Universal Multimedia Experience (UME) is the notion that a user should receive informative, adapted content anytime and anywhere. Personalization of videos, which adapts their content according to user preferences, is a vital aspect of achieving the UME vision. User preferences can be translated into several types of constraints that must be considered by the adaptation process, including semantic constraints directly related to the content of the video. To deal with these semantic constraints, a fine-grained adaptation, which can go down to the level of video objects, is necessary. The overall goal of this adaptation process is to provide users with adapted content that maximizes their Quality of Experience (QoE). This QoE depends at the same time on the level of the user's satisfaction in perceiving the adapted content, the amount of knowledge assimilated by the user, and the adaptation execution time. In video adaptation frameworks, the Adaptation Decision Taking Engine (ADTE), which can be considered the "brain" of the adaptation engine, is responsible for achieving this goal. The task of the ADTE is challenging, as many adaptation operations can satisfy the same semantic constraint, thus giving rise to several feasible adaptation plans. Indeed, for each entity undergoing the adaptation process, the ADTE must decide on the adequate adaptation operator that satisfies the user's preferences while maximizing his or her quality of experience. The first challenge is to objectively measure the quality of the adapted video, taking into consideration the multiple aspects of the QoE. The second challenge is to assess this quality beforehand in order to choose the most appropriate adaptation plan among all possible plans. The third challenge is to resolve conflicting or overlapping semantic constraints, in particular conflicts arising from constraints that the owner's intellectual property rights place on the modification of the content. In this thesis, we tackled the aforementioned challenges by proposing a Utility Function (UF) that integrates semantic concerns with the user's perceptual considerations. This UF models the relationships among adaptation operations, user preferences, and the quality of the video content. We integrated this UF into an ADTE. This ADTE performs a multi-level piecewise reasoning to choose the adaptation plan that maximizes the user-perceived quality. Furthermore, we included intellectual property rights in the adaptation process and modeled content owner constraints. We dealt with the problem of conflicting user and owner constraints by mapping it to a known optimization problem. Moreover, we developed the Semantic Video Content Annotation Tool (SVCAT), which produces structural and high-level semantic annotations according to an original object-based video content model. We also modeled the user's preferences, proposing extensions to MPEG-7 and MPEG-21. All the developed contributions were carried out as part of a coherent framework called PIAF (Personalized video Adaptation Framework), a complete, modular, MPEG-standard-compliant framework that covers the whole process of semantic video adaptation. We validated this research with qualitative and quantitative evaluations, which assess the performance and the efficiency of the proposed adaptation decision-taking engine within PIAF. The experimental results show that the proposed UF has a high correlation with subjective video quality evaluation.

    The term "Universal Multimedia Experience" (UME) describes the vision that a user can consume video content tailored to his or her individual preferences. In this dissertation, UME additionally takes into account semantic constraints that are directly connected with the consumption of the video content, while the quality of the video experience for the user is to be maximized. In the dissertation this quality is represented by the user's satisfaction when perceiving the modification of the videos. The modification of the videos is produced by a video adaptation, e.g. by deleting or altering scenes or objects that do not comply with a semantic constraint. The core of the video adaptation is the Adaptation Decision Taking Engine (ADTE). It determines the operators that resolve the semantic constraints and then computes possible adaptation plans to be applied to the video. Furthermore, for each adaptation step the ADTE must determine, based on the operators, how the user's preferences can be taken into account. The second challenge is the assessment and maximization of the quality of an adapted video. The third challenge is the handling of conflicting semantic constraints, in particular those connected with intellectual property rights. In this dissertation the above challenges are solved with the help of the Personalized video Adaptation Framework (PIAF), which is based on the Moving Picture Experts Group (MPEG) standards MPEG-7 and MPEG-21. PIAF is a framework that covers the entire process of video adaptation. It models the relationship between the adaptation operators, the preferences of the users and the quality of the videos. Furthermore, the problem of optimally selecting an adaptation plan for maximum video quality is investigated. For this purpose a Utility Function (UF), which unites the semantic constraints with the preferences expressed by the user, is defined and employed in the ADTE. In addition, the Semantic Video Content Annotation Tool (SVCAT) has been developed to carry out structural and semantic annotation. Likewise, the preferences of the users have been captured with MPEG-7 and MPEG-21 descriptors. The development of these software tools and algorithms is necessary to obtain a complete and modular framework, whereby PIAF covers the complete scope of semantic video adaptation. The ADTE has been validated in qualitative and quantitative evaluations. Among other things, the evaluation results show that, with respect to quality, the UF exhibits a high correlation with the subjective perception of selected users
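    A minimal sketch may make the adaptation-plan selection performed by the ADTE concrete. It assumes a small set of candidate operators per video entity, each collapsed to a single pre-computed utility score, and one placeholder semantic constraint; the thesis' UF combines semantic and perceptual terms and its reasoning is multi-level, so all operator names, scores and constraints below are hypothetical.

from itertools import product

# Hypothetical adaptation operators per video entity with illustrative
# utility scores (higher = better perceived quality after adaptation).
operators = {
    "scene_1": [("keep", 1.0), ("blur_object", 0.7), ("drop_scene", 0.2)],
    "scene_2": [("keep", 1.0), ("drop_scene", 0.3)],
}

def violates_constraints(plan):
    # Placeholder semantic/owner constraint: scene_1 must not be kept as-is.
    return plan["scene_1"] == "keep"

# Enumerate all feasible plans and keep the one with the highest utility.
best_plan, best_utility = None, float("-inf")
for choice in product(*operators.values()):
    plan = {entity: op for entity, (op, _) in zip(operators, choice)}
    if violates_constraints(plan):
        continue
    utility = sum(score for _, score in choice)
    if utility > best_utility:
        best_plan, best_utility = plan, utility

print(best_plan, best_utility)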

    A Dynamic Behavioral Biometric Approach to Authenticate Users Employing Their Fingers to Interact with Touchscreen Devices

    The use of mobile devices has extended to all areas of human life and has changed the way people work and socialize. Mobile devices are susceptible to getting lost, stolen, or compromised. Several approaches have been adopted to protect the information stored on these devices. One of these approaches is user authentication. The two most popular methods of user authentication are knowledge-based and token-based methods, but they present different kinds of problems. Biometric authentication methods have emerged in recent years as a way to deal with these problems. They use an individual's unique characteristics for identification and have proven to be somewhat effective in authenticating users. Biometric authentication methods also present several problems. For example, they aren't 100% effective in identifying users, some of them are not well perceived by users, others require too much computational effort, and others require special equipment or special postures by the user. Ultimately, their implementation can result in unauthorized use of the devices or in the user being annoyed by the implementation. New ways of interacting with mobile devices have emerged in recent years. This makes it necessary for authentication methods to adapt to these changes and take advantage of them. For example, the use of touchscreens has become prevalent in mobile devices, which means that biometric authentication methods need to adapt to it. One important aspect to consider when adopting these new methods is their acceptance by users. The Technology Acceptance Model (TAM) states that system use is a response that can be predicted by user motivation. This work presents an authentication method that can constantly verify the user's identity, which can help prevent unauthorized use of a device or access to sensitive information. The goal was to authenticate people while they used their fingers to interact with their touchscreen mobile devices doing ordinary tasks like vertical and horizontal scrolling. The approach used six biometric traits to do the authentication. The combination of those traits allowed for authentication at the beginning and at the end of a finger stroke. Support Vector Machines were employed and the best results obtained show Equal Error Rate values around 35%. Those results demonstrate the potential of the approach to verify a person's identity. Additionally, this work tested the acceptance of the approach among participants, which can influence its eventual adoption. An acceptance level of 80% was obtained, which compares favorably with other behavioral biometric approaches
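    As a rough illustration of this kind of pipeline, the sketch below trains a Support Vector Machine on per-stroke feature vectors and estimates the Equal Error Rate from the ROC curve. The feature set, data and genuine/impostor labels are synthetic placeholders, not the six traits or the results reported in this work.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

# Hypothetical per-stroke features (e.g. start/end pressure, finger area,
# velocity, duration, direction); label 1 = genuine user, 0 = impostor.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 6))            # placeholder for real touch data
y = (rng.random(600) > 0.5).astype(int)  # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Equal Error Rate: the point where the false accept rate (fpr) equals
# the false reject rate (fnr).
fpr, tpr, _ = roc_curve(y_te, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fpr - fnr))]
print(f"EER ~ {eer:.2f}")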

    Transforming visitor experience with museum technologies: The development and impact evaluation of a recommender system in a physical museum

    Over the past few decades, many attempts have been made to develop recommender systems (RSs) that could improve visitor experience (VX) in physical museums. Nevertheless, to determine the effectiveness of a museum RS, studies often encompass system performance evaluations, e.g., user experience (UX) and accuracy level tests, and rarely extend to the VX realm that museum RSs aim to support. The reported challenges with defining and evaluating VX might explain why the evidence that the interaction with an RS during the visit can enhance the quality of VX remains limited. Without this evidence, however, the purpose of developing museum RSs and the benefits of using RSs during a museum visit are in question. This thesis interrogates whether and how museum RSs can impact VX. It first consolidates the literature about VX-related constructs into one coherent analytical framework of museum experience which delineates the scope of VX. Following this analysis, this research develops and validates a VX instrument with cognitive, introspective, restorative, and affective variables which could be used to evaluate VX with or without museum technologies. Then, through a series of UX- and VX-related studies in the physical museum, this research implements a fully working content-based RS and establishes how the interaction with the developed RS transforms VX. The findings in this thesis demonstrate that the impact of an RS on the quality of VX can depend on the level of engagement with the system during a museum visit. Additionally, the impact can be insufficient on some mental processes within VX, and it can vary following the changes in contextual variables. The findings also reinforce that system performance tests cannot replace a VX-focused analysis, because a positive UX and additional information about museum objects in an RS do not imply an improved VX. Therefore, this thesis underscores that more VX-related evaluations of museum RSs are required to identify how to strengthen and extend their influence on the quality of VX
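    A content-based recommender of the kind described above can be sketched in a few lines: exhibit descriptions are turned into TF-IDF vectors, a visitor profile is built from the objects already engaged with, and the most similar unvisited exhibits are ranked. The exhibit names and descriptions below are invented for illustration; the system developed in the thesis and its content model are not specified at this level of detail in the abstract.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical exhibit descriptions and the objects a visitor has engaged with.
exhibits = {
    "amphora": "ancient greek pottery wine storage ceramic",
    "steam_engine": "industrial revolution machinery iron power",
    "loom": "textile industry weaving machinery wool",
}
visited = ["steam_engine"]

vec = TfidfVectorizer()
matrix = vec.fit_transform(exhibits.values())
names = list(exhibits.keys())

# Visitor profile = mean TF-IDF vector of visited items; rank the remaining
# exhibits by cosine similarity to the profile.
profile = np.asarray(matrix[[names.index(v) for v in visited]].mean(axis=0))
sims = cosine_similarity(profile, matrix).ravel()
ranking = sorted(
    (n for n in names if n not in visited),
    key=lambda n: sims[names.index(n)],
    reverse=True,
)
print(ranking)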

    Landscape brief for Egyptian desert new towns

    SIGLE. Available from British Library Document Supply Centre - DSC:D79953 / BLDSC - British Library Document Supply Centre. GB, United Kingdom

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation

    Incorporating Environmental Stimuli into the Service Profit Chain in a Retail Grocery Context: A Structural Equation Modelling Approach.

    Several theoretical contributions are highlighted. Firstly, the employee environmental stimuli construct contained five sub-factors, termed E-design, E-music, E-lighting, E-olfaction and E-layout. This highlights the complexity of the environmental stimuli for employees. Furthermore, this research found a significant direct link between employee environmental stimuli and employee satisfaction. Given that the literature examining the effects of environmental stimuli on employee behaviour is astonishingly scant (Skandrani et al., 2011), this is an important contribution to several literature streams. Secondly, examining a global configuration of the environmental stimuli can provide a fuller framework for understanding and exploring customer and employee behavioural responses. In particular, customer environmental stimuli should be examined as a multidimensional construct consisting of five sub-factors: Design, Music, Lighting, Olfaction and Layout. In addition, the environmental stimuli construct is found to be separate from service quality and to serve as an antecedent to it; this is a significant contribution to the debate surrounding the multidimensionality of the service quality construct
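    A minimal sketch of how such a measurement-and-structural model could be specified in an SEM package (here semopy, with lavaan-style syntax) follows. The indicator and variable names are placeholders, and the actual model in the study is far richer, covering the full service profit chain.

from semopy import Model

# Hypothetical specification mirroring the abstract: the employee
# environmental-stimuli construct is formed by five sub-factor scores
# and is linked to employee satisfaction.
desc = """
EmployeeStimuli =~ e_design + e_music + e_lighting + e_olfaction + e_layout
Satisfaction ~ EmployeeStimuli
"""

model = Model(desc)
# df would be a DataFrame whose columns are the sub-factor scale scores and
# a measured satisfaction score, e.g.:
# import pandas as pd
# df = pd.read_csv("survey_scores.csv")
# model.fit(df)
# print(model.inspect())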