341 research outputs found

    INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS

    Get PDF
    Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market advantages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions, and elevating those cognitive competencies would strengthen organizations' strategic positions. With the advent of Deep Learning (DL), IoT, and Edge Computing, IIS have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL on computationally limited devices such as IoT nodes; applying deep learning to IoT devices can turn everyday devices into intelligent interactive assistants. IIS suffer from many challenges that affect their service quality, process quality, and information quality, and these challenges in turn affect user acceptance in terms of satisfaction, use, and trust. Moreover, Information Systems (IS) research has paid little attention to IIS development and to the foreseeable contribution of new paradigms to addressing IIS challenges. Therefore, this research investigates how employing new AI paradigms can enhance the overall quality, and consequently the user acceptance, of IIS. The research employs different AI paradigms to develop two IIS. The first system uses deep learning, edge computing, and IoT to provide scene-aware ridesharing monitoring, improving the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system enhances the real estate search process by formulating search as a multi-criteria decision problem; it also allows users to filter properties by degree of damage, where a deep learning network locates damage in each real estate image. The system improves real-estate website service quality in terms of flexibility, relevancy, and efficiency. The research contributes to Information Systems research through two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfill, and it highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
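
    As an illustration of the multi-criteria formulation mentioned above, the following is a minimal sketch of ranking properties with a weighted-sum aggregation. The criteria, weights, and the per-image damage score assumed to come from a detection network are illustrative assumptions, not the thesis's actual model.

```python
# Minimal sketch of property search as a multi-criteria decision:
# normalize each criterion to [0, 1] and rank by a weighted sum.
# Criteria, weights, and the damage score are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Property:
    price: float         # lower is better
    area: float          # higher is better
    damage_score: float  # fraction of image area flagged as damaged; lower is better

def normalize(values, invert=False):
    """Scale values to [0, 1]; invert for criteria where lower is better."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    scaled = [(v - lo) / span for v in values]
    return [1.0 - s for s in scaled] if invert else scaled

def rank(properties, weights=(0.4, 0.3, 0.3)):
    """Return (score, property) pairs, best first."""
    w_price, w_area, w_damage = weights
    price = normalize([p.price for p in properties], invert=True)
    area = normalize([p.area for p in properties])
    damage = normalize([p.damage_score for p in properties], invert=True)
    scores = [w_price * pr + w_area * ar + w_damage * dm
              for pr, ar, dm in zip(price, area, damage)]
    return sorted(zip(scores, properties), key=lambda t: t[0], reverse=True)

listings = [Property(250_000, 90, 0.05), Property(200_000, 70, 0.30),
            Property(300_000, 120, 0.10)]
for score, prop in rank(listings):
    print(f"{score:.2f}", prop)
```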

    Interactive optimisation for high-lift design.

    Get PDF
    Interactivity always involves two entities, one of which by default is a human user. The specialised subject of human factors is introduced in the context of computational aerodynamics and optimisation, specifically a high-lift aerofoil. The trial-and-error nature of a design process hinges on the designer's knowledge, skill and intuition. A basic, important assumption of a man-machine system is that in solving a problem there are some steps in which the computer has an advantageous edge, while in others the human has dominance. Computational technologies are now an indispensable part of aerospace technology, and algorithms involving significant user interaction, either during the process of generating solutions or as a component of post-optimisation evaluation where human decision making is involved, are increasingly popular; a multi-objective particle swarm optimiser is one example. Several design optimisation problems in engineering are by nature multi-objective: the designer's interest lies in simultaneous optimisation against two or more objectives which are usually in conflict. Interactive optimisation allows the designer to understand trade-offs between the various objectives and is generally used as a tool for decision making. The solution to a multi-objective problem, in which improvement in one objective comes at the cost of deterioration in at least one other, is called a Pareto set. There are multiple solutions to a problem and multiple improvement ideas for an already existing design. The final responsibility for identifying an optimal solution or idea rests with the design engineers, and decisions are made on the basis of quantitative metrics displayed as numbers or graphs. However, visualisation, ergonomics and human factors influence this decision-making process. A visual, graphical depiction of the Pareto front is often used as a design aid for decision making, even though chances of errors and fallacies are fundamental to engineering design. An effective visualisation tool benefits complex engineering analyses by providing the decision-maker with a clear image of the most important information. Two high-lift aerofoil data-sets have been used as test-case examples; the module comprises a multi-element solver, an optimiser based on a swarm intelligence technique, and visual techniques including parallel co-ordinates, heat map, scatter plot, self-organising map and radial coordinate visualisation. Factors that affect optima and various evaluation criteria have been studied in light of the human user. This research enquires into interactive optimisation by adapting three interactive approaches (information trade-off, reference point and classification) and investigates selected visualisation techniques which act as chief aids in the context of high-lift design trade studies. Human-in-the-loop engineering, man-machine interaction and interface, along with influencing factors, reliability, validation and verification in the presence of design uncertainty, are considered. The research structure, choice of optimiser and visual aids adopted in this work are influenced by and streamlined to fit the parallel on-going development work on Airbus' Python-based tool. Results and analysis, together with a literature survey, are presented in this report. The words human, user, engineer, aerodynamicist, designer, analyst and decision-maker/DM are synonymous and are used interchangeably in this research.
In a virtual engineering setting, a suitable visualisation tool is a crucial prerequisite for an efficient interactive optimisation task. The underlying premise of this work is that the various optimisation design tools and methods are most useful when combined with a human engineer's insight; questions such as why, what and how might help aid aeronautical technical innovation.
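
    As a concrete companion to the Pareto set definition above, the sketch below filters a set of objective vectors down to its non-dominated subset, assuming both objectives are minimised; the sample points are illustrative.

```python
# Minimal sketch of Pareto dominance and non-dominated filtering for a
# two-objective minimisation problem (e.g., two conflicting aerodynamic
# objectives of a high-lift aerofoil). Sample values are illustrative.

def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (minimisation assumed for all objectives)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(objective_vectors):
    """Return the non-dominated subset, i.e. the Pareto set approximation."""
    return [a for a in objective_vectors
            if not any(dominates(b, a) for b in objective_vectors if b is not a)]

# Each tuple is (objective 1, objective 2); (2.5, 3.0) is dominated by (2.0, 2.0).
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 3.0)]
print(pareto_front(points))  # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```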

    Typogenetic design - aesthetic decision support for architectural shape generation

    Get PDF
    Typogenetic Design is an interactive computational design system combining generative design, evolutionary search and architectural optimisation technology. The tool actively supports design decisions during architectural shape generation by using an aesthetic system that guides the search process toward preferences expressed interactively by the designer. An image input serving as design reference is integrated by means of shape comparison to provide direction to the exploratory search. During the shape generation process, the designer can choose solutions interactively in a graphical user interface; those choices are then used to support the selection process, as part of the fitness function, through online classification. Enhancing human decision-making capabilities in human-in-the-loop design systems addresses the complexity of architecture with respect to aesthetic requirements. On the strength of machine learning, the integral performance trade-off during multi-criteria optimisation was extended to address aesthetic preferences. The tacit knowledge and subjective understanding of designers can thus be used in the shape generation process through interactive mechanisms. As a result, an integrated support system for performance-based design was developed and tested. Closing the loop from design to construction through design optimisation of structural nodes in a set of case studies confirmed the need for intuitive design systems, interfaces and mechanisms that make architectural optimisation more accessible and easier to handle. This dissertation investigated Typogenetic Design as a tool for initial morphological search. Novel instruments for human interaction with design systems were developed using mixed-method research. The present investigation consists of an in-depth technological enquiry into the use of interactive generative design for exploratory search as an integrated support system for performance-based design. Associated project-based research on the design potential of Typogenetic Design showcases the application of the design system to architecture. Generative design as an expressive tool to produce architectural geometries was investigated with regard to its ability to drive the initial morphological search of complex geometries. The reinterpretation of processes and boosting of productivity by artificial intelligence was instrumental in exploring a holistic approach combining quantitative and qualitative criteria in a human-in-the-loop system. The shift in focus from an objective to a subjective understanding of computational design processes indicates a perspective change from optimisation to learning as a computational paradigm. Integrating learning capabilities into architectural optimisation enhances the capability of architects to explore large design spaces of emergent representations using evolutionary search. The shift from design automation to interactive generative design makes it possible for designers to contribute their knowledge and expertise to the computational system by evaluating shape solutions; at the same time, the aesthetic system is trained to adapt to the choices made by the designer. Furthermore, an initial image input allows the designer to add a design reference to the Typogenetic Design process, and shape comparison using a similarity measure provides additional guidance to architectural shape generation using grammar evolution.
Finally, a software prototype was built and tested by means of user-experience evaluation. These participant experiments led to the specification of custom requirements for the implementation of a parametric Typogenetic tool. I explored semi-automated design in application to different design cases using the software prototype of Typogenetic Design. Interactive mass-customisation is a promising application of Typogenetic Design for interactively specifying product structure and component composition. The semi-automated design paradigm is one step on the way to moderating the balance between automation and control in computational design systems.
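
    A minimal sketch of the interactive selection mechanism described above, in which designer choices train a preference model online and the learned preference is blended into the fitness function. The classifier choice, feature representation and blending weight are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch of a fitness function blending a quantitative performance
# score with an aesthetic preference learned online from designer choices.
# Classifier, features, and blending weight are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # supports incremental (online) updates
classes = np.array([0, 1])            # 0 = rejected, 1 = chosen by the designer

def record_choice(shape_features, chosen):
    """Update the aesthetic model each time the designer picks or skips a shape."""
    clf.partial_fit([shape_features], [int(chosen)], classes=classes)

def fitness(shape_features, performance_score, w_aesthetic=0.5):
    """Blend the performance objective with the learned aesthetic preference."""
    try:
        preference = clf.predict_proba([shape_features])[0][1]
    except Exception:  # model not yet trained: neutral aesthetic preference
        preference = 0.5
    return (1 - w_aesthetic) * performance_score + w_aesthetic * preference

record_choice([0.3, 0.8, 0.1], chosen=True)
print(fitness([0.3, 0.8, 0.1], performance_score=0.7))
```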

    Innovative Wireless Localization Techniques and Applications

    Get PDF
    Innovative methodologies for the wireless localization of users, and related applications, are addressed in this thesis. In recent years, the widespread diffusion of pervasive wireless communication (e.g., Wi-Fi) and global localization services (e.g., GPS) has boosted interest in and research on location information and services. Location-aware applications are becoming fundamental to a growing number of consumers (e.g., navigation, advertising, seamless user interaction with smart places) and to private and public institutions in the fields of energy efficiency, security, safety, fleet management and emergency response. In this context, the position of the user (where) is often more valuable for deploying services of interest than the identity of the user (who). In detail, opportunistic approaches based on the analysis of electromagnetic field indicators (i.e., received signal strength and channel state information) for presence detection, localization, tracking and posture recognition of cooperative and non-cooperative (device-free) users in indoor environments are proposed and validated in real-world test sites. The methodologies are designed to exploit existing wireless infrastructures and commodity devices without any hardware modification. In outdoor environments, where global positioning technologies are already available in commodity devices and vehicles, the research and knowledge-transfer activities focus instead on the design and validation of algorithms and systems that support decision makers and operators in increasing efficiency, operational security and the management of large fleets, as well as on localizing sensed information in order to gain situation awareness. In this field, a decision support system for emergency response and Civil Defense asset management (i.e., personnel and vehicles equipped with TETRA mobile radios) is described in terms of its architecture and the results of two years of experimental validation.
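
    One common opportunistic technique of the kind described above is received-signal-strength fingerprinting. The sketch below matches a live Wi-Fi scan against an offline radio map using distance-weighted k-nearest neighbours; the radio map, access point names and parameters are illustrative assumptions.

```python
# Minimal sketch of RSS fingerprinting localization: an offline radio map of
# (position -> RSS per access point) records is matched against a live scan
# with weighted k-nearest neighbours. All values and AP names are illustrative.
import math

# Offline radio map: position (x, y) in metres -> mean RSS (dBm) per AP.
radio_map = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -60},
    (5.0, 0.0): {"ap1": -55, "ap2": -50, "ap3": -65},
    (0.0, 5.0): {"ap1": -65, "ap2": -60, "ap3": -45},
    (5.0, 5.0): {"ap1": -70, "ap2": -45, "ap3": -50},
}

def locate(scan, k=3):
    """Estimate position as the distance-weighted mean of the k closest fingerprints."""
    def rss_distance(fingerprint):
        common = set(fingerprint) & set(scan)
        return math.sqrt(sum((fingerprint[ap] - scan[ap]) ** 2 for ap in common))

    nearest = sorted(radio_map.items(), key=lambda kv: rss_distance(kv[1]))[:k]
    weights = [1.0 / (rss_distance(fp) + 1e-6) for _, fp in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(nearest, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(nearest, weights)) / total
    return x, y

print(locate({"ap1": -50, "ap2": -55, "ap3": -62}))
```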

    Architecture, Techniques and Models to Enable Data Science in the Gaia Mission Archive

    Get PDF
    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 26/05/2017. The massive amounts of data that the world produces every day pose new challenges to modern societies in terms of how to leverage their inherent value. Social networks, instant messaging, video, smart devices and scientific missions are just a few examples of the vast number of sources generating data every second. As the world becomes more and more digitalized, new needs arise for organizing, archiving, sharing, analyzing, visualizing and protecting the ever-increasing data sets, so that we can truly develop into a data-driven economy that reduces inefficiencies and increases sustainability, creating new business opportunities along the way. Traditional approaches for harnessing data are no longer suitable, as they lack the means to scale to the larger volumes in a timely and cost-efficient manner. This has somewhat changed with the advent of Internet companies like Google and Facebook, which have devised new ways of tackling this issue. However, the variety and complexity of the value chains in the private sector, as well as the increasing demands and constraints under which the public sector operates, call for ongoing research that can yield new strategies for dealing with data, facilitate the integration of providers and consumers of information, and guarantee a smooth and prompt transition when adopting these cutting-edge technological advances. This thesis aims to provide novel architectures and techniques that will help perform this transition towards Big Data in massive scientific archives. It highlights the common pitfalls that must be faced when embracing it, and how to overcome them, especially when the data sets, their transformation pipelines and the tools used for the analysis are already present in the organizations. Furthermore, a new perspective for facilitating a smoother transition is laid out. It involves the usage of higher-level, use-case-specific frameworks and models, which naturally bridge the gap between the technological and scientific domains. This alternative will effectively widen the possibilities of scientific archives and therefore contribute to the reduction of the time to science. The research will be applied to the European Space Agency cornerstone mission Gaia, whose final data archive will represent a tremendous discovery potential. It will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), providing unprecedented position, parallax and proper motion measurements for about one billion stars. The successful exploitation of this data archive will depend to a large degree on the ability to offer the proper architecture, i.e. infrastructure and middleware, upon which scientists will be able to do exploration and modeling with this huge data set. In consequence, the approach taken needs to enable data fusion with other scientific archives, as this will produce the synergies leading to an increment in scientific outcome, both in volume and in quality. The set of novel techniques and frameworks presented in this work addresses these issues by contextualizing them with the data products that will be generated in the Gaia mission. All these considerations have led to the foundations of the architecture that will be leveraged by the Science Enabling Applications Work Package.
Last but not least, the effectiveness of the proposed solution will be demonstrated through the implementation of some ambitious statistical problems that will require significant computational capabilities, and which will use Gaia-like simulated data (the first Gaia data release took place on September 14th, 2016). These ambitious problems are referred to as the Grand Challenge, a somewhat grandiloquent name for the task of inferring, from a probabilistic point of view, a set of parameters for the Initial Mass Function (IMF) and Star Formation Rate (SFR) of a given set of stars (with a huge sample size), from noisy estimates of their masses and ages respectively. This will be achieved by using Hierarchical Bayesian Modeling (HBM). In principle, the HBM can incorporate stellar evolution models to infer the IMF and SFR directly, but in this first step presented in this thesis we start with a somewhat less ambitious goal: inferring the Present-Day Mass Function (PDMF) and the Present-Day Age Distribution (PDAD). Moreover, the performance and scalability analyses carried out also prove the suitability of the models for the large amounts of data that will be available in the Gaia data archive.
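
    A schematic of the hierarchical Bayesian setup described above, written for the mass dimension (the age dimension is analogous). The Gaussian noise model shown here is an illustrative assumption:

```latex
% Hierarchical model for the present-day mass function (PDMF), sketched:
% theta parametrises the PDMF; each true mass m_i is a latent variable,
% observed only through a noisy estimate \hat{m}_i (Gaussian noise assumed
% here for illustration). Latent masses are marginalised in the posterior.
\begin{align}
  m_i \mid \theta &\sim \mathrm{PDMF}(\theta), \qquad i = 1, \dots, N \\
  \hat{m}_i \mid m_i &\sim \mathcal{N}\!\left(m_i, \sigma_i^2\right) \\
  p\!\left(\theta \mid \hat{m}_{1:N}\right) &\propto
    p(\theta) \prod_{i=1}^{N} \int p\!\left(\hat{m}_i \mid m_i\right)\,
    p\!\left(m_i \mid \theta\right)\, \mathrm{d}m_i
\end{align}
```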

    Cognitive Foundations for Visual Analytics

    Full text link

    Emerging Informatics

    Get PDF
    The book on emerging informatics brings together new concepts and applications that will help define and outline problem-solving methods and features in designing business and human systems. It covers international aspects of information systems design, in which many relevant technologies are introduced for the welfare of human and business systems. This initiative can be viewed as an emergent area of informatics that helps better conceptualise and design new world-class solutions. The book provides four flexible sections that accommodate a total of fourteen chapters; each section specifies learning contexts in emerging fields. Each chapter presents a clear basis through the problem conception and its applicable technological solutions. I hope this will help further exploration of knowledge in the informatics discipline.

    Multi-Agent Systems

    Get PDF
    A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been developed and used in several application domains.
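
    A minimal sketch of the core idea: autonomous agents that each perceive a shared environment, decide locally and act, with system behaviour emerging from their interaction. All names and behaviours are illustrative assumptions.

```python
# Minimal sketch of a multi-agent loop: each agent perceives the shared
# environment, makes a local, proactive decision, and acts; the combined
# effects emerge from the interaction. Names and behaviours are illustrative.
class Agent:
    def __init__(self, name, goal):
        self.name, self.goal = name, goal

    def perceive(self, environment):
        """Observe only the part of the environment relevant to this agent."""
        return environment.get(self.goal, 0)

    def act(self, observation):
        """Proactive local decision: contribute one unit of work to the goal."""
        return {self.goal: observation + 1}

def step(agents, environment):
    """One round: every agent observes and acts; effects merge into the environment."""
    for agent in agents:
        environment.update(agent.act(agent.perceive(environment)))
    return environment

env = {"tasks_done": 0}
agents = [Agent("a1", "tasks_done"), Agent("a2", "tasks_done")]
print(step(agents, env))  # {'tasks_done': 2}
```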

    Virtual Reality Games for Motor Rehabilitation

    Get PDF
    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games offer a unique opportunity to provide a tailored environment for each user that better suits their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
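
    In the spirit of the FLAME-based approach described above, the following sketch fuzzifies in-game performance signals, applies simple fuzzy rules, and defuzzifies to a satisfaction estimate. The membership functions and rule base are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a fuzzy-logic emotion estimate: game events are fuzzified,
# simple rules are evaluated with min/max operators, and the result is
# defuzzified into a satisfaction score. All shapes and rules are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_satisfaction(kill_rate, death_rate):
    # Fuzzify inputs (rates assumed normalised to [0, 1]).
    doing_well = tri(kill_rate, 0.2, 0.7, 1.0)
    doing_badly = tri(death_rate, 0.2, 0.7, 1.0)
    # Rule evaluation: doing well -> satisfied; doing badly -> frustrated;
    # neither -> bored (min acts as fuzzy AND on the negated memberships).
    satisfied = doing_well
    frustrated = doing_badly
    bored = min(1.0 - doing_well, 1.0 - doing_badly)
    # Defuzzify: weighted centroid over representative satisfaction levels.
    levels = {0.9: satisfied, 0.1: frustrated, 0.4: bored}
    total = sum(levels.values()) or 1.0
    return sum(level * weight for level, weight in levels.items()) / total

print(estimate_satisfaction(kill_rate=0.8, death_rate=0.1))  # ~0.73
```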