
    An Adjectival Interface for procedural content generation

    In this thesis, a new interface for the generation of procedural content is proposed, in which the user describes the content that they wish to create by using adjectives. Procedural models are typically controlled by complex parameters and often require expert technical knowledge. Since people communicate with each other using language, an adjectival interface to the creation of procedural content is a natural step towards addressing the needs of non-technical and non-expert users. The key problem addressed is that of establishing a mapping between adjectival descriptors and the parameters employed by procedural models. We show how this can be represented as a mapping between two multi-dimensional spaces, adjective space and parameter space, and approximate the mapping by applying novel function approximation techniques to points of correspondence between the two spaces. These corresponding point pairs are established through a training phase, in which random procedural content is generated and then described, allowing one to map from parameter space to adjective space. Since we ultimately seek a means of mapping from adjective space to parameter space, particle swarm optimisation is employed to select a point in parameter space that best matches any given point in adjective space. The overall result is a system in which the user can specify adjectives that are then used to create appropriate procedural content, by mapping the adjectives to a suitable set of procedural parameters and employing the standard procedural technique using those parameters as inputs. In this way, none of the control offered by procedural modelling is sacrificed: although the adjectival interface is simpler, it can at any point be stripped away to reveal the standard procedural model and give users access to the full set of procedural parameters. As such, the adjectival interface can be used for rapid prototyping to create an approximation of the content desired, after which the procedural parameters can be used to fine-tune the result. The adjectival interface also serves as a means of intermediate bridging, affording users a more comfortable interface until they are fully conversant with the technicalities of the underlying procedural parameters. Finally, the adjectival interface is compared and contrasted with an interface that allows for direct specification of the procedural parameters. Through user experiments, it is found that the adjectival interface presented in this thesis is not only easier to use and understand, but also that it produces content which more accurately reflects users' intentions.
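    The inversion step described above, selecting procedural parameters whose predicted adjectives best match a user's description, can be illustrated with a small particle swarm optimisation sketch. This is only a rough illustration under assumed conditions: the forward model predict_adjectives, the dimensionalities and the swarm settings are placeholders standing in for the learned adjective-space mapping and the real procedural parameters of the thesis.

```python
# Minimal PSO sketch: search parameter space for a point whose predicted
# adjective ratings best match a target description. The forward mapping,
# dimensions and coefficients below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
DIM = 6            # number of procedural parameters (assumed)
ADJ = 3            # number of adjective axes (assumed)
W = rng.normal(size=(DIM, ADJ))

def predict_adjectives(params):
    """Placeholder forward mapping: parameter space -> adjective space."""
    return np.tanh(params @ W)

def fitness(params, target):
    return np.linalg.norm(predict_adjectives(params) - target)

def pso(target, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-1, 1, size=(n_particles, DIM))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p, target) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -1, 1)
        f = np.array([fitness(p, target) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

target = np.array([0.8, -0.2, 0.5])        # desired point in adjective space
params = pso(target)
print("chosen procedural parameters:", np.round(params, 3))
```

    Clamping the particle positions to a fixed range mirrors the bounded nature of typical procedural parameters; in practice the bounds would come from the procedural model itself rather than the arbitrary interval used here.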

    Methodologies for distributed and higher dimensional geographic information

    In today's digital era, cartography has changed its role from that of a pure visual model of the Earth's surface to an interface to other spatial and aspatial information. Along with this, the representation and manipulation of graphical information in three-dimensional space is required for many applications. Problems and difficulties must be overcome in order to facilitate the move to three-dimensional models, multimedia, and distributed data. Can accurate measurements, at sufficient resolution, and using affordable resources be obtained? Will application software usefully process, in all aspects, models of the real world, sounds, and videos? Combined with this, the workplace is becoming distributed, requiring applications and data that can be used across the globe as easily as in the office. A distributed, three-dimensional GIS is required, with all the procedural and recording functionality of current two-dimensional systems. Such a GIS would maintain a model, typically comprised of solids of individual buildings, roads, utilities etc., with both external and internal detail, represented on a suitable digital terrain model. This research examines virtual reality software as part of an answer. Alternatively, can technologies such as HTML, VRML, and scripting, along with object-orientation and open systems, allow for the display and interrogation of networked data sets? The particular application of this technology considered during this research is the need for accurate reconstruction of historical urban monuments. The construction, manipulation, and exploration of these models is often referred to as virtual heritage. This research constructs an innovative and resource-effective methodology, the Phoenix algorithm, which requires only a single image for creating three-dimensional models of buildings at large scale. The development of this algorithm is discussed and the results obtained from it are compared with those obtained using traditional three-dimensional capture techniques. Furthermore, possible solutions to the earlier questions are given and discussed.

    Nonlinear Parametric and Neural Network Modelling for Medical Image Classification

    System identification and artificial neural networks (ANN) are families of algorithms, used in systems engineering and machine learning respectively, that use structure detection and learning strategies to build models of complex systems from input-output data. These models play an essential role in science and engineering because they fill the gap in cases where we know the input-output behaviour of a system but lack a mathematical model to understand and predict its future changes, or even to prevent threats. In this context, the nonlinear approximation of systems is now very popular, since it better describes complex instances. Digital image processing, in turn, is an area of systems engineering that is expanding the level of analysis in a variety of real-life problems while becoming more attractive and affordable over time. Medicine has made the most of it by supporting important human decision-making processes through computer-aided diagnosis (CAD) systems. This thesis presents three different frameworks for breast cancer detection, with approaches ranging from nonlinear system identification, through nonlinear system identification coupled with simple neural networks, to multilayer neural networks. In particular, the nonlinear system identification approaches termed the Nonlinear AutoRegressive with eXogenous inputs (NARX) model and the MultiScales Radial Basis Function (MSRBF) neural networks appear for the first time in image processing. Alongside these contributions, the Multilayer-Fuzzy Extreme Learning Machine (ML-FELM) neural network is presented for faster training and more accurate image classification. A central research aim is to take advantage of nonlinear system identification and multilayer neural networks to enhance the feature extraction process and thereby bolster classification in CAD systems. In the case of multilayer neural networks, the extraction is carried out through stacked autoencoders, a bottleneck network architecture that promotes a data transformation between layers. In the case of nonlinear system identification, the goal is to add flexible models capable of capturing distinctive features of digital images that may not be readily captured by simpler approaches. Detecting nonlinearities in digital images is complementary to linear modelling, since the goal is to extract features in greater depth, so that both linear and nonlinear elements can be captured. This aim is relevant because, according to previous work cited in the first chapter, not all spatial relationships existing in digital images can be explained appropriately with linear dependencies. Experimental results show that the methodologies based on system identification produced reliable image models with customised mathematical structure. The models came to include nonlinearities in different proportions, depending upon the case under examination. The information about nonlinearity and model structure was used as part of the whole image model. It was found that, in some instances, the models from different clinical classes in the breast cancer detection problem presented a particular structure. For example, NARX models of the malignant class showed a higher nonlinearity percentage and depended more on exogenous inputs compared to other classes. Regarding classification performance, comparisons of the three new CAD systems with existing methods gave variable results.
As for the NARX model, its performance was superior in three cases but was surpassed in two; however, the comparison must be taken with caution since different databases were used. The MSRBF model was better in 5 out of 6 cases and had superior specificity in all instances, surpassing the closest competing model by 3.5% on that measure. The ML-FELM model was the best in 6 out of 6 cases, although it was surpassed in accuracy by 0.6% in one case and in specificity by 0.22% in another.
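    As a concrete illustration of the NARX structure mentioned above, the sketch below fits a linear-in-parameters NARX model by least squares on a synthetic one-dimensional signal. It is a minimal, hypothetical example: the lag orders, the degree-2 polynomial expansion and the toy signals are assumptions for illustration and do not reproduce the image models or the MSRBF/ML-FELM networks developed in the thesis.

```python
# Minimal sketch: identify a linear-in-parameters NARX model by least squares.
# Lag orders, polynomial degree and the synthetic signals are illustrative
# assumptions, not the structures or data used in the thesis.
import numpy as np

def narx_regressors(y, u, ny=2, nu=2):
    """Build a regressor matrix from lagged outputs y and exogenous inputs u."""
    n = max(ny, nu)
    rows, targets = [], []
    for k in range(n, len(y)):
        lags = [y[k - i] for i in range(1, ny + 1)] + \
               [u[k - i] for i in range(1, nu + 1)]
        # Degree-2 polynomial expansion: linear terms plus pairwise products.
        quad = [a * b for idx, a in enumerate(lags) for b in lags[idx:]]
        rows.append([1.0] + lags + quad)
        targets.append(y[k])
    return np.array(rows), np.array(targets)

# Synthetic data standing in for an image-derived signal and an exogenous input.
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.zeros(200)
for k in range(2, 200):
    y[k] = 0.5 * y[k - 1] - 0.2 * y[k - 2] + 0.3 * u[k - 1] + 0.1 * y[k - 1] * u[k - 1]

X, t = narx_regressors(y, u)
theta, *_ = np.linalg.lstsq(X, t, rcond=None)   # estimated model coefficients
print("number of model terms:", theta.size)     # 1 constant + 4 linear + 10 quadratic
```

    In a full identification workflow the candidate term set would normally be pruned by a structure-selection step rather than kept in full, which is how the models acquire the class-dependent structure discussed above.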

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Rethinking Change

    Following the International Conference on Art, Museums and Digital Cultures (April 2021), this e-book seeks to extend the discussion on the concept of change that is usually associated with the relationship between culture and technology. Through the contributions of 32 authors from 12 countries, the book not only questions how digital media have inspired new artistic and curatorial practices, but also how, conversely, critical and creative proposals in the fields of art and museums have opened up alternative paths to technological development. Acknowledging the different approaches to the topic, ranging from retrospective readings to the analysis of recent issues and projects, the book is divided into seven sections and a visual essay, highlighting collaborative territories and the crossovers between different areas of scientific knowledge. Available in open access, this publication is the result of a collaborative project promoted by the Institute of Art History of the School of Social Sciences and Humanities, NOVA University of Lisbon, and maat – Museum of Art, Architecture and Technology. Partner institution: Instituto Superior Técnico. Sponsor: Millennium bcp Foundation. Media partner: Umbigo magazine.

    Handbook of Mathematical Geosciences

    This Open Access handbook, published on the occasion of the IAMG's 50th anniversary, presents a compilation of invited path-breaking research contributions by award-winning geoscientists who have been instrumental in shaping the IAMG. It contains 45 chapters that are categorized broadly into five parts: (i) theory, (ii) general applications, (iii) exploration and resource estimation, (iv) reviews, and (v) reminiscences, covering related topics such as mathematical geosciences, mathematical morphology, geostatistics, fractals and multifractals, spatial statistics, multipoint geostatistics, compositional data analysis, informatics, geocomputation, numerical methods, and chaos theory in the geosciences.

    Modeling of probe-surface interactions in nanotopographic measurements

    Contact stylus methods remain important tools in surface roughness measurement, but as metrological capability increases there is a growing need for better understanding of the complex interactions between a stylus tip and a surface. For example, questions arise about the smallest scales of topographic features that can be described with acceptable uncertainty, or about how to compare results taken with different types of probe. This thesis uses simulation methods to address some aspects of this challenge. A new modelling and simulation program has been developed and used to examine the measurement of the fine structure of real and simulated surfaces by the stylus method. Although able to scan any arbitrary surface with any arbitrary stylus shape, the majority of the results given here use idealized stylus shapes and 'real' ground steel surfaces. The simulation is used not only to measure the roughness of the surface but also to show the distribution of contacts on the tip when scanning a surface. Surface maps of the fine structure of ground steel surfaces were measured by Atomic Force Microscopy (AFM) to ensure high lateral resolution compared to the capability of the target profilometry instruments. The data collected by the AFM were checked for missing data and interpolated by the scanning probe image processor (SPIP) software. Three basic computer-generated stylus tips with different shapes have been used: conical, pyramid and spherical. This work proposes and explores in detail the novel concept of "thresholding" as an adjunct to kinematic contact modelling: the tip is incremented downwards 'into' the surface and the resulting contact regions (or islands) are compared to the position of the initial kinematic contact. Essentially, the research questions inquire into the effectiveness of so-called kinematic contact models by modifying them in various ways and judging whether significantly different results arise. Initial evidence shows that examination of the contact patterns as the threshold increases can identify the intensity with which different asperity regions interact with the stylus. In the context of sections of the ground surface with a total height variation in the order of 500 nm to 1 μm, for example, a 5 nm threshold caused little change in contact sizes from the kinematic point, but 50 nm caused them to grow asymmetrically, eventually picking out the major structures of the surface. The simulations have naturally confirmed that the stylus geometry and size can have a significant effect on most roughness parameters of the measured surface in 3D. Therefore, the major contribution is an investigation of the inherent (finite probe) distortions during topographic analysis using a stylus-based instrument. The surprising finding, which is worthy of greater investigation, is how insensitive some of the popular parameters are to major changes in stylus condition, even when dealing with very fine structure within localized areas of a ground surface. For these reasons, it is concluded that thresholding is not likely to become a major tool in analysis, although it can certainly be argued that it retains some practical value as a diagnostic of the measurement process. This research will ultimately allow better inter-comparison between measurements from different instruments by allowing a 'software translator' between them. Short of fully realizing this ambitious aim, the study also contributes to improving uncertainty models for stylus instruments.
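    To make the thresholding idea concrete, the sketch below scans a synthetic 1D profile with a spherical tip, records the kinematic trace, and then counts the surface samples that lie within a chosen vertical threshold of the tip at one scan position. The tip radius, sample spacing, synthetic profile and threshold values are illustrative assumptions only and do not reproduce the 3D simulation program or the AFM data described in the thesis.

```python
# Rough 1D sketch of kinematic stylus contact plus "thresholding": lower the
# tip by a small amount and collect the surface samples that would then touch.
# Tip radius, sample spacing, synthetic profile and thresholds are assumptions.
import numpy as np

def tip_profile(radius_nm, half_width, step_nm):
    """Vertical drop of a spherical tip below its apex at each lateral offset."""
    x = np.arange(-half_width, half_width + 1) * step_nm
    return radius_nm - np.sqrt(np.maximum(radius_nm**2 - x**2, 0.0))

def scan(surface, tip):
    """Kinematic trace: apex height at which the tip first touches the surface."""
    half = len(tip) // 2
    trace = np.full(len(surface), np.nan)
    for i in range(half, len(surface) - half):
        window = surface[i - half:i + half + 1]
        trace[i] = np.max(window - tip)
    return trace

def contact_mask(surface, tip, trace, i, threshold_nm):
    """Surface samples under the tip that lie within threshold_nm of touching it."""
    half = len(tip) // 2
    window = surface[i - half:i + half + 1]
    gap = (trace[i] + tip) - window          # clearance between tip and surface
    return gap <= threshold_nm

step_nm = 10.0                                # lateral sample spacing (assumed)
rng = np.random.default_rng(2)
surface = np.cumsum(rng.normal(scale=5.0, size=400))   # synthetic profile, nm
tip = tip_profile(radius_nm=2000.0, half_width=20, step_nm=step_nm)
trace = scan(surface, tip)
for threshold in (0.0, 5.0, 50.0):            # kinematic contact, then growing islands
    n = int(contact_mask(surface, tip, trace, i=200, threshold_nm=threshold).sum())
    print(f"samples in contact within {threshold:.0f} nm: {n}")
```

    Sweeping the threshold from zero upward reproduces the qualitative behaviour described above: the contact region grows from the single kinematic point into islands that eventually pick out the larger structures of the surface.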

    Video Vortex Reader: Responses to YouTube

    The Video Vortex Reader is the first collection of critical texts to deal with the rapidly emerging world of online video – from its explosive rise in 2005 with YouTube, to its future as a significant form of personal media. After years of talk about digital convergence and cross-media platforms, we now witness the merger of the Internet and television at a pace no one predicted. These contributions from scholars, artists and curators evolved from the first two Video Vortex conferences in Brussels and Amsterdam in 2007, which focused on responses to YouTube, and address key issues around independent production and distribution of online video content. What does this new distribution platform mean for artists and activists? What are the alternatives?

    An inquiry into the nature of effective dialogue and discourse and peacebuilding through leadership

    The research study and findings presented in this work underscore the necessity of designing and developing effective strategies for inter-paradigm dialogue and discourse for peacebuilding. The study argues that the adoption and application of appropriate dialogue strategies engender the emergence of a culture of leadership that can foster sustainable peace. Dialogue and discourse processes are considered to be intricately connected to processes of conflict transformation and resolution, and the linkages between dialogue, peacebuilding and leadership are mirrored in macro- and micro-spaces of engagement, namely the much-contested cultural, political and economic spaces in which myriad and diverse perspectives reside. The potential for peace, it is argued, lies substantially in the formulation and design of contextually relevant frameworks for equitable and sustainable socio-economic development, and macro-micro intersections play out in the dialogue field within which societies and individuals can seek and strive to anticipate, accommodate, attain and enact their life wisdoms into peaceful systems of co-existence. This view also speaks to how consensual and sustainable global and regional collaborative enterprise requires the parallel accompaniment of well-configured partnerships in support of cultural responsiveness and social cohesion. Through discussion of appropriate methodologies of dialogue and discourse, the identification and statement of objectives for this study, together with the design, elaboration and configuration of its research framework, aim to contribute towards furthering debate surrounding the integration of prevailing theoretical approaches, in order to gain a better understanding of the linkages and dynamics between peacebuilding initiatives, conflict resolution processes, and effective and sustainable leadership. Dialogue is adopted as the key component in the design of an effective model and architecture for peacebuilding. The enquiry underscores emerging gaps that require addressing, which may in turn highlight zones of ambiguity, or dialectics between action and practice, and between researcher and practitioner.