
    Development, Operation, and Results From the Texas Automated Buoy System

    The Texas Automated Buoy System (TABS) is a coastal network of moored buoys that report near-real-time observations of currents and winds along the Texas coast. Established in 1995, the primary mission of TABS is ocean observation in the service of oil spill preparedness and response. The state of Texas funded the system with the intent of improving the data available to oil spill trajectory modelers. In its 12 years of operation, TABS has proven its usefulness during realistic oil spill drills and actual spills. The original capabilities of TABS, i.e., measurement of surface currents and temperatures, have been extended to the marine surface layer, the entire water column, and the sea floor. In addition to observations, a modeling component has been integrated into the TABS program. The goal is to form the core of a complete ocean observing system for Texas waters. As the nation embarks on the development of an integrated ocean observing system, TABS will continue to be an active participant in the Gulf of Mexico Coastal Ocean Observing System (GCOOS) regional association and the primary source of near-surface current measurements in the northwestern Gulf of Mexico. This article describes the origin of TABS, the philosophy behind the operation and development of the system, the resulting modifications to improve the system, the expansion of the system to include new sensors, the development of TABS forecasting models and real-time analysis tools, and how TABS has met many of the societal goals envisioned for GCOOS.

    Project portfolio evaluation and selection using mathematical programming and optimization methods

    Project portfolio selection is an essential process for portfolio management and plays an important role in accomplishing organizational goals. This research explores the feasibility of developing a project portfolio selection tool using mathematical programming and optimization models, specifically 0-1 integer programming (single-objective portfolios) and goal programming (multi-objective portfolios). These methods select the set of projects that delivers the maximum benefit (e.g., net present value or profit), represented by objective functions subject to a series of constraints (e.g., technical requirements and/or resource availability), while considering the scheduling of selected projects over a planning horizon, interdependence relationships among projects (e.g., complementary projects and mutually exclusive projects), and special cases such as mandatory and ongoing projects. Based on the proposed model, a Decision Support System (DSS) will be developed and tested for accuracy, flexibility, and ease of use. This computational tool is designed for decision makers and users who are not familiar with mathematical programming models.
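
    As a concrete illustration of the 0-1 integer programming formulation described above, here is a minimal sketch using the open-source PuLP library (an assumed tool; the abstract does not name a solver). The project names, benefits, costs, budget, and interdependence constraints are invented placeholders.

```python
# Minimal sketch of 0-1 integer programming for project portfolio selection.
# All data below are illustrative placeholders, not values from the study.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

projects = ["A", "B", "C", "D"]
npv    = {"A": 120, "B": 90, "C": 150, "D": 60}   # benefit of each project
cost   = {"A": 50,  "B": 40, "C": 70,  "D": 30}   # resource requirement
budget = 160

model = LpProblem("portfolio_selection", LpMaximize)
x = {p: LpVariable(f"select_{p}", cat=LpBinary) for p in projects}

model += lpSum(npv[p] * x[p] for p in projects)             # maximize total NPV
model += lpSum(cost[p] * x[p] for p in projects) <= budget  # resource constraint
model += x["A"] + x["B"] <= 1   # A and B are mutually exclusive
model += x["D"] <= x["C"]       # D is complementary: it requires C
model += x["A"] == 1            # A is a mandatory (or ongoing) project

model.solve()
print([p for p in projects if value(x[p]) == 1], "NPV:", value(model.objective))
```

    For the multi-objective (goal programming) case, one would add deviation variables for each goal and minimize their weighted sum, which can be expressed in the same framework.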

    Improving intrusion detection systems using data mining techniques

    Recent surveys and studies have shown that cyber-attacks have caused a lot of damage to organisations, governments, and individuals around the world. Although developments are constantly occurring in the computer security field, cyber-attacks still cause damage as they are developed and evolved by hackers. This research looked at industrial challenges in the intrusion detection area and identified two main ones: first, signature-based intrusion detection systems such as SNORT lack the capability to detect attacks with new signatures without human intervention; second, signature-based detection is not efficient at detecting multi-stage attacks. The novelty of this research lies in the methodologies developed to tackle these challenges. The first challenge was handled by developing a multi-layer classification methodology. The first layer is based on a decision tree, while the second layer is a hybrid module that uses two data mining techniques: neural networks and fuzzy logic. The second layer attempts to detect attacks that the first layer misses. The system detects attacks with new signatures and then updates the SNORT signature holder automatically, without any human intervention. The results show a high detection rate for attacks with new signatures; however, the false positive rate needs to be lowered. The second challenge was approached by evaluating IP information using fuzzy logic. This approach looks at the identity of participants in the traffic, rather than the sequence and contents of the traffic. The results show that this approach can help predict attacks at very early stages in some scenarios; however, combining it with an approach that examines the sequence and contents of the traffic, such as event correlation, would achieve better performance than either approach individually.
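
    A rough sketch of the two-layer cascade described above, using scikit-learn (an assumed toolkit; the thesis does not specify its implementation). Layer 1 is a decision tree; records it labels as normal are re-examined by a neural network, standing in here for the hybrid neural-network/fuzzy-logic module. The features and labels are random placeholders.

```python
# Two-layer cascade sketch: a decision tree screens traffic first, and a
# neural network takes a second look at whatever the tree let through.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 10))       # placeholder traffic features
y_train = rng.integers(0, 2, 500)     # 0 = normal, 1 = attack
X_new = rng.random((20, 10))          # unseen traffic

layer1 = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
layer2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)

pred = layer1.predict(X_new)
missed = pred == 0                    # layer 1 saw nothing suspicious
if missed.any():
    pred[missed] = layer2.predict(X_new[missed])  # layer 2 gets a second look

# In the system described above, records flagged here (pred == 1) would
# drive the automatic update of the SNORT signature holder.
print("flagged as attacks:", np.flatnonzero(pred == 1))
```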

    Adaptive Computing Systems for Aerospace

    Today's computer systems are growing more and more complex at a pace that requires the development of novel and more effective methodologies to automate their design. Space, in particular, represents a challenging environment: without protection from ionizing and particle radiation, CMOS-based electronics are subject to transient faults, performance degradation, accelerated wear, and, ultimately, system failure. Traditional approaches adopted to guarantee reliability and extended lifetime are based on redundancy that is established at design time. These solutions are expensive and sometimes inefficient, as they increase the complexity and size of a system, exposing it to higher risks of overheating and radiation-induced errors. Moreover, critical systems---e.g., time-constrained ones and those where access is limited---must be able to cope with pivotal situations without relying on human intervention. Hence the emerging interest in computer systems with adaptive capabilities as the most suitable solution for novel high-performance embedded devices for aerospace.
Self-adaptive computing carries unmatched potential and great promise for the creation of a new generation of smart, more reliable computers, and it addresses the challenge of designing and programming modern and future computer systems that must meet conflicting goals. Drawing from the fields of artificial intelligence and reconfigurable systems, we aim to develop self-adaptive computer systems for aerospace. Our goal is to improve their efficiency, fault tolerance, and computational capabilities. The first step in this research is the experimental analysis of the most popular multi-objective design-space exploration algorithms for high-level design. These algorithms were collected from the recent literature and include heuristic, evolutionary, and statistical methods. Their comparison provides insights that we use to define guidelines for choosing the most appropriate optimization algorithm, given the features of the design space. For the creation of a self-managing optimization framework---enabling the adaptive trade-off of multiple objectives---we leverage the tools of probabilistic graphical models. We introduce a mechanism based on dynamic hidden Markov models that balances the availability and lifetime of multiprocessor systems. This is achieved by estimating the occurrence of permanent faults amid transient faults, and by dynamically migrating the computation onto spare resources when failure occurs. The dynamic nature of the model makes it adjustable to different mission profiles and fault rates. The results show that we are able to extend system lifetimes while keeping availability close to ideal. On account of the stringent timing constraints imposed by aerospace systems, we then investigate the optimization of fault tolerance under real-time requirements. We propose a methodology to improve the reliability of computation in the presence of transient errors when mapping real-time tasks onto a homogeneous multiprocessor system with voltage and frequency scaling capabilities. In this framework, we take advantage of probability theory to define a novel trade-off between power consumption and fault tolerance. As we recognize that resilience is a pervasive property of interest (e.g., for the design and analysis of generic complex systems), we adapt a formal definition of it to a probabilistic framework, again derived from hidden Markov models. This allows us to realistically model the stochastic evolution and partial observability of complex real-world environments. Within this framework, we propose an efficient algorithm for the exact computation of the essential inference step required for generic property checking. To demonstrate the flexibility of this approach, we validate it in the context, among others, of a self-aware, reconfigurable computing system for aerospace. Finally, we move the scope of our research towards robotics and multi-agent systems: topics of growing popularity in space exploration. We tackle the problem of connectivity assessment and maintenance in the distributed and self-adaptive context of swarm robotics. We review the limitations of existing solutions and propose a novel methodology to create connected complex geometries for multiple-task coverage. Additional contributions in the areas of (i) CubeSat design, (ii) the modelling of space radiation for FPGA fault injection, and (iii) probabilistic timing analysis for real-time systems are summarized in the appendices.
In the author's opinion, this research provides a number of useful stepping stones for the creation of a new generation of computing systems that autonomously---and reliably---perform their tasks for longer periods of time, fostering simpler and cheaper space exploration.
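
    The permanent-versus-transient fault estimation mentioned above lends itself to a compact illustration. Below is a minimal forward-algorithm sketch under assumed, illustrative transition and emission probabilities (the thesis's actual models are dynamic and calibrated to mission profiles): the filtered probability that a processor is permanently faulty rises as errors accumulate, and crossing a threshold triggers task migration.

```python
# Forward-algorithm sketch: is an error burst transient, or a permanent fault?
# All probabilities are illustrative placeholders.
import numpy as np

# Hidden states: 0 = healthy, 1 = permanently faulty (absorbing).
T = np.array([[0.999, 0.001],
              [0.0,   1.0  ]])      # state transition probabilities
# Observations: 0 = task OK, 1 = error detected.
E = np.array([[0.98, 0.02],         # healthy: rare transient errors
              [0.30, 0.70]])        # permanent fault: frequent errors

belief = np.array([1.0, 0.0])       # start certain the core is healthy

def update(belief, obs):
    """One forward step: predict via T, then reweight by the observation."""
    predicted = belief @ T
    posterior = predicted * E[:, obs]
    return posterior / posterior.sum()

for obs in [0, 0, 1, 1, 1, 1]:      # a burst of errors arrives
    belief = update(belief, obs)
    if belief[1] > 0.95:            # likely permanent: migrate the tasks
        print("migrate computation to spare resources")
        break
    print(f"P(permanent fault) = {belief[1]:.3f}")
```

    With these numbers, a run of healthy observations keeps the permanent-fault probability near zero, while a few consecutive errors drive it past the threshold.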

    Earth resources: A continuing bibliography with indexes (issue 47)

    This bibliography lists 524 reports, articles, and other documents introduced into the NASA scientific and technical information system between July 1 and September 30, 1985. Emphasis is placed on the use of remote sensing and geophysical instrumentation in spacecraft and aircraft to survey and inventory natural resources and urban areas. Subject matter is grouped according to agriculture and forestry, environmental changes and cultural resources, geodesy and cartography, geology and mineral resources, hydrology and water management, data processing and distribution systems, instrumentation and sensors, and economic analysis.

    Hidden Markov modelling of movement data from fish


    Doing augmented reality: a discourse analysis

    Since its emergence in the 1990s, augmented reality (AR) has been described as the superimposition of virtual objects on the view of the physical world, and it holds the promise of fundamentally changing the way we interact with the digital universe. However, AR has never achieved these objectives, and after the Google Glass Experiment in 2013, a conflict between research visions and consumer expectations became evident. On the view that reality is "done" through discursive practices and enactments, this thesis examines, through textual and discourse analysis, how AR is done by those involved in its development, promotion, and use, aiming to understand the vision behind its development. We first analyse the emergence and development of AR, followed by an analysis of the Google Glass Experiment as a materialization of the technology. Given the potential for behaviour and lifestyle change held in AR development, understanding the underlying discourses and visions is of crucial importance as we bring this technology into our reality. The result is a better assessment of the current state of the technology, as well as insights for future development and solutions in the field of augmented reality.

    Fault Recovery in Swarm Robotics Systems using Learning Algorithms

    When faults occur in swarm robotic systems, they can have a detrimental effect on collective behaviours, to the point that failed individuals may jeopardise the swarm's ability to complete its task. Although fault tolerance is a desirable property of swarm robotic systems, fault recovery mechanisms have not yet been thoroughly explored. Individual robots may suffer a variety of faults, which affect collective behaviours in different ways; a recovery process is therefore required that can cope with many different failure scenarios. In this thesis, we propose a novel approach for fault recovery in robot swarms that uses Reinforcement Learning and Self-Organising Maps to select the most appropriate recovery strategy for any given scenario. The learning process is evaluated in both centralised and distributed settings. Additionally, we experimentally evaluate the performance of this approach against random selection of fault recovery strategies, using simulated collective phototaxis, aggregation, and foraging tasks as case studies. Our results show that this machine learning approach outperforms random selection and allows swarm robotic systems to recover from faults that would otherwise prevent the swarm from completing its mission. This work builds upon existing research in fault detection and diagnosis in robot swarms, with the aim of creating a fully fault-tolerant swarm capable of long-term autonomy.
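
    As a rough illustration of the strategy-selection idea described above, the sketch below reduces the problem to a bandit-style reinforcement learning update over a small table of hypothetical fault scenarios and recovery strategies. Everything here (scenario names, strategies, the reward signal) is an invented placeholder; in the thesis, Self-Organising Maps map sensor data onto scenarios, and evaluation runs in simulated swarm tasks.

```python
# Bandit-style Q-learning sketch: learn which recovery strategy works best
# for each diagnosed fault scenario. All names and rewards are placeholders.
import random

scenarios  = ["motor_failure", "sensor_failure", "comms_failure"]
strategies = ["power_cycle", "follow_neighbour", "request_tow"]

# Hidden "ground truth" used only to generate a noisy reward signal.
best = {"motor_failure": "request_tow",
        "sensor_failure": "follow_neighbour",
        "comms_failure": "power_cycle"}

def recovery_reward(scenario, strategy):
    """Placeholder: noisy observation of post-recovery task performance."""
    p = 0.8 if strategy == best[scenario] else 0.2
    return 1.0 if random.random() < p else 0.0

# Q-table: learned usefulness of each strategy in each scenario.
Q = {s: {a: 0.0 for a in strategies} for s in scenarios}
alpha, epsilon = 0.1, 0.2          # learning rate, exploration rate

for episode in range(2000):
    s = random.choice(scenarios)   # a fault is detected and diagnosed
    if random.random() < epsilon:  # explore occasionally...
        a = random.choice(strategies)
    else:                          # ...otherwise exploit what has been learnt
        a = max(Q[s], key=Q[s].get)
    r = recovery_reward(s, a)
    Q[s][a] += alpha * (r - Q[s][a])   # one-step value update

for s in scenarios:
    print(s, "->", max(Q[s], key=Q[s].get))
```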