86 research outputs found

    Skyler and Bliss

    Hong Kong remains the backdrop to the science fiction movies of my youth. The city reminds me of my former training in the financial sector. It is a city in which I could have succeeded in finance, but as far as art goes it is a young city, and I am a young artist. A frustration emerges; much like the mould, the artist too had to develop new skills by killing off his former desires and manipulating technology. My new series, entitled HONG KONG surface project, shows a new direction in my artistic research, in which my technique becomes ever simpler, reducing the traces of pixelation until objects appear almost as they were found and photographed. Skyler and Bliss presents tectonic plates based on satellite images of the Arctic. Made in hot and humid Hong Kong, where mushrooms grow ferociously and the city is artificially refrigerated by climate control, this series provides a conceptual image of an imaginary topographic map for survival. (Laurent Segretier)

    ECLAP 2012 Conference on Information Technologies for Performing Arts, Media Access and Entertainment

    There is a long history of Information Technology innovation within the Cultural Heritage area. The performing arts have likewise been enriched by a number of innovations that unveil a range of synergies and possibilities. Most of the technologies and innovations produced for digital libraries, media entertainment and education can be exploited in the field of performing arts, with adaptation and repurposing. Performing arts offer many interesting challenges and opportunities for research, innovation and the exploitation of cutting-edge research results from interdisciplinary areas. For these reasons, ECLAP 2012 can be regarded as a continuation of past conferences such as AXMEDIS and WEDELMUSIC (both published by IEEE and FUP). ECLAP is a European Commission project to create a social network and media access service for performing arts institutions in Europe and to build the e-library of performing arts, exploiting innovative solutions coming from ICT

    The Deleuzian Cineaste: placing movement at the heart of film analysis

    In this thesis, the routine interest in visual images as fundamental to film studies is displaced in favor of a focus on movement. Gilles Deleuze’s Cinema books provide the foundation, but mediation between their philosophical intentions and the demands of film analysis becomes necessary. The figure of the Deleuzian cineaste is constructed as a means to identify a systematic approach to filmic movement in its many forms and to demonstrate subsequent analysis based on movement

    Génération des séquences de désassemblage et leur évaluation : Intégration dans un environnement de réalité virtuelle

    Integration of disassembly operations during product design is an important issue today. It is estimated that, at the earliest stages of product design, disassembly operations represent almost 30% of a product's total cost. Nowadays, disassembly operation simulation of industrial products draws strong interest in interactive simulations through immersive, real-time schemes. In this context, this thesis first presents a method for generating the feasible disassembly sequences for selective disassembly. The method is based on the lowest levels of a disassembly product graph. Instead of considering the geometric constraints for each pair of components, the proposed method considers the geometric contact and collision relationships among the components in order to generate the so-called Disassembly Geometry Contacting Graph (DGCG). The latter is then used for disassembly sequence generation, reducing the number of possible sequences by ignoring any components unrelated to the target. A simulation framework integrated in a virtual reality environment was developed, allowing the minimum number of possible disassembly sequences to be generated. Secondly, a method for disassembly operation evaluation by 3D geometric removability analysis in a virtual environment is proposed. It is based on seven new criteria: visibility of a part, disassembly angles, number of tool changes, path orientation changes, sub-assembly stability, neck score and bending score. All criteria are expressed as dimensionless coefficients calculated automatically, allowing the complexity of disassembly sequences to be evaluated. For this purpose, a mixed virtual reality disassembly environment (VRDE) is developed based on the Python programming language, utilizing the VTK (Visualization Toolkit) and ODE (Open Dynamics Engine) libraries and the STEP, WRL and STL exchange formats. The analysis results demonstrate the feasibility of the proposed approach, providing significant assistance for the evaluation of disassembly sequences during the Product Development Process (PDP). A further consequence of the present work consists in ranking the criteria according to their importance: moderation coefficients may be allocated to each of them, allowing a more comprehensive evaluation method
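    The generation step lends itself to a compact illustration. The following minimal Python sketch uses invented toy data: blocks stands in for the collision relations captured by the DGCG, feasible_sequences enumerates removal orders that free a target part while pruning unrelated components, and an equally weighted sum over the seven dimensionless criteria stands in for the evaluation step. The thesis' actual geometric reasoning with VTK and ODE is not reproduced here.

    from itertools import permutations

    # Toy stand-in for a Disassembly Geometry Contacting Graph (DGCG):
    # 'blocks' maps each part to the parts colliding with its removal
    # path, i.e., parts that must be taken out first (hypothetical data).
    blocks = {
        "cover": set(),
        "shaft": {"cover"},
        "gear": {"cover", "shaft"},
    }

    def required_before(target):
        """Transitively collect the parts that must be removed before the
        target, ignoring components unrelated to it (the DGCG pruning idea)."""
        needed, stack = set(), [target]
        while stack:
            for blocker in blocks.get(stack.pop(), ()):
                if blocker not in needed:
                    needed.add(blocker)
                    stack.append(blocker)
        return needed

    def feasible_sequences(target):
        """Enumerate removal orders that respect the blocking constraints."""
        for order in permutations(required_before(target)):
            removed, feasible = set(), True
            for part in order:
                if blocks.get(part, set()) - removed:  # still blocked
                    feasible = False
                    break
                removed.add(part)
            if feasible:
                yield order + (target,)

    # Evaluation sketch: the seven dimensionless criteria with invented
    # values, combined via equal moderation coefficients.
    criteria = {"visibility": 0.8, "disassembly_angles": 0.3,
                "tool_changes": 0.5, "path_reorientation": 0.4,
                "stability": 0.9, "neck": 0.2, "bending": 0.1}
    weights = {name: 1.0 for name in criteria}  # moderation coefficients
    complexity = sum(weights[n] * v for n, v in criteria.items())

    print(list(feasible_sequences("gear")))  # [('cover', 'shaft', 'gear')]
    print(f"weighted complexity: {complexity:.2f}")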

    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    By the Wikipedia definition, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”. Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, subject to the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is synonym of semantics-enabled knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is synonym of scene-from-image reconstruction and understanding, CV ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS, EO-IU ⊃ ESA EO Level 2 product, and CV ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for the yet-unfulfilled GEOSS development is the systematic generation at the ground segment of the ESA EO Level 2 product. Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling an analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
    EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolution from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof-of-concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform in linear time a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations
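    To make the Level 2 notion concrete, here is a minimal, hypothetical Python sketch of its two ingredients: a radiometric calibration from digital numbers to reflectance, and a naive data-derived scene classification map. The gains, offsets, NDVI thresholds and toy legend are all invented; real Level 2 processing also applies the atmospheric, adjacency and topographic corrections omitted here.

    import numpy as np

    # Hypothetical two-band (red, NIR) digital-number (DN) image.
    rng = np.random.default_rng(0)
    dn = rng.integers(0, 1024, size=(2, 64, 64)).astype(np.float64)

    # (1) Radiometric calibration DN -> reflectance with invented
    # gain/offset; real processing continues toward surface reflectance
    # (SURF) via atmospheric and topographic correction.
    gain, offset = 2.0e-4, -0.1
    reflectance = np.clip(gain * dn + offset, 0.0, 1.0)

    # (2) Naive data-derived scene classification map (SCM) with a toy
    # general-purpose legend: 0 = other, 1 = vegetation, 2 = water.
    red, nir = reflectance
    ndvi = (nir - red) / (nir + red + 1e-9)
    scm = np.zeros(ndvi.shape, dtype=np.uint8)
    scm[ndvi > 0.4] = 1
    scm[ndvi < -0.2] = 2

    level2_like = {"surface_reflectance": reflectance, "scm": scm}
    print({name: arr.shape for name, arr in level2_like.items()})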

    Configurable nD-visualization for complex Building Information Models

    With the ongoing development of building information modelling (BIM) towards a comprehensive coverage of all construction project information in a semantically explicit way, visual representations have become decoupled from the building information models. While traditional construction drawings implicitly contained the visual representation alongside the information, nowadays visual representations are generated on the fly, hard-coded in software applications dedicated to other tasks such as analysis, simulation, structural design or communication. Due to the abstract nature of information models and the increasing amount of digital information captured during construction projects, visual representations are essential for humans to access the information, understand it, and engage with it. At the same time, digital media open up the new field of interactive visualizations. The full potential of BIM can only be unlocked with customized, task-specific visualizations, with engineers and architects actively involved in the design and development process of these visualizations. The visualizations must be reusable and reliably reproducible during communication processes. Further, to support creative problem solving, it must be possible to modify and refine them. This thesis aims at reconnecting building information models and their visual representations: on a theoretical level, on the level of methods and in terms of tool support. First, the research seeks to improve the knowledge about visualization generation in conjunction with current BIM developments such as the multimodel. The approach is based on the reference model of the visualization pipeline and addresses structural as well as quantitative aspects of visualization generation. Second, based on the theoretical foundation, a method is derived to construct visual representations from given visualization specifications. To this end, the idea of a domain-specific language (DSL) is employed. Finally, a software prototype proves the concept. Using the visualization framework, visual representations can be generated from a specific building information model and a specific visualization description
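    The DSL idea can be illustrated with a minimal, hypothetical Python sketch: a visualization description is a list of filter/styling rules evaluated against model elements, mirroring the filter and mapping stages of the visualization pipeline. The toy model, rule format and styling vocabulary are all invented; the thesis' actual DSL and multimodel handling are richer.

    # Toy building information model: a flat list of elements.
    model = [
        {"id": "w1", "type": "Wall", "fire_rating": 90},
        {"id": "w2", "type": "Wall", "fire_rating": 30},
        {"id": "d1", "type": "Door", "fire_rating": 30},
    ]

    # Visualization description: filter -> visual mapping, first match wins.
    spec = [
        {"where": lambda e: e["type"] == "Wall" and e["fire_rating"] >= 60,
         "style": {"color": "red", "opacity": 1.0}},
        {"where": lambda e: True,  # fallback rule for everything else
         "style": {"color": "gray", "opacity": 0.3}},
    ]

    def render(model, spec):
        """Map each element to the style of the first matching rule,
        mimicking the filter/mapping stages of the visualization pipeline."""
        out = {}
        for elem in model:
            for rule in spec:
                if rule["where"](elem):
                    out[elem["id"]] = rule["style"]
                    break
        return out

    print(render(model, spec))
    # w1 is highlighted red; w2 and d1 fall back to translucent gray.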

    Orchestrator selection process for cloud-native machine learning experimentation

    Integrated master's dissertation in Informatics Engineering. Machine learning (ML) model development is a very experimental, repetitive, and error-prone task, because ML is itself very obscure - there is no way to know beforehand what model works best for our goals, so practitioners have an incentive to experiment with as many models, approaches and techniques as they can. Additionally, going from raw data to a well-adjusted model is a delicate process that often requires complex, multi-step pipelines. Combine the two factors and it becomes apparent how easy it is, without a well-defined process, to get lost in a sea of artifacts and results, hindering development with poor reusability, heavy technical debt, and integration hell. This makes adherence to best practices - MLOps - paramount. However, the recent boom experienced in this field brought a plethora of different tools and services, each trying to satisfy a different subset of the needs of the model life cycle, meaning that, more often than not, ML practitioners do not know what the best set of tools for their use case might be. The experimental nature of ML means we should indeed try different tools, but there is a high risk that a tool might not fit the necessary requirements, generating needless costs. One particularly relevant type of tool is the orchestrator - a central piece of the experimentation process which controls the communication and execution of the components of a model pipeline. This work follows the creation process for an enterprise ML cloud environment, with particular focus on the selection of an adequate orchestrator for cloud-native setups. Additionally, it presents MetaTool, a web application designed to speed up future tool selection processes by leveraging knowledge gathered during previous instances. Finally, it reaches two key conclusions: first, broader organizational factors that might seem out of scope can influence or even alter the final choice, and second, although using a tool like MetaTool might speed up the decision-making process, it requires significant organizational commitment
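    What an orchestrator does can be sketched in a few lines of plain Python: a pipeline is a DAG of named steps, and the orchestrator resolves their execution order and carries artifacts between them. The step names, functions and shared-dict communication below are invented for illustration and deliberately ignore what distinguishes real cloud-native orchestrators (containerized execution, retries, caching, distributed scheduling).

    from graphlib import TopologicalSorter

    # Hypothetical pipeline steps communicating through a shared context.
    def ingest(ctx):     ctx["raw"] = [3, 1, 2]
    def preprocess(ctx): ctx["clean"] = sorted(ctx["raw"])
    def train(ctx):      ctx["model"] = {"weights": sum(ctx["clean"])}
    def evaluate(ctx):   ctx["score"] = ctx["model"]["weights"] / len(ctx["clean"])

    steps = {"ingest": ingest, "preprocess": preprocess,
             "train": train, "evaluate": evaluate}
    deps = {"preprocess": {"ingest"},  # step -> upstream dependencies
            "train": {"preprocess"},
            "evaluate": {"train"}}

    def run_pipeline(steps, deps):
        """Execute steps in a dependency-respecting order, sharing a
        context dict as the channel between pipeline components."""
        ctx = {}
        for name in TopologicalSorter(deps).static_order():
            print(f"running {name}")
            steps[name](ctx)
        return ctx

    print(run_pipeline(steps, deps)["score"])  # 2.0 for the toy data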

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, acquired interactively or autonomously from data in cognitive and neural systems, and on their potential or real applications in different domains