
    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques.
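    Among the listed topics, an anytime algorithm is straightforward to illustrate: it can be interrupted at any point and still return the best solution found so far, with quality improving the longer it runs. The sketch below is a minimal, hypothetical example, not taken from the symposium notes; the flow-time cost function and the random-swap repair move are placeholder assumptions.

        import random
        import time

        def schedule_cost(schedule):
            # Placeholder cost: total flow time (sum of task completion times).
            finish, total = 0, 0
            for duration in schedule:
                finish += duration
                total += finish
            return total

        def anytime_schedule(tasks, deadline_s):
            # Iterative repair: interruptible at the deadline, and always
            # returns the best ordering found so far.
            best = list(tasks)
            best_cost = schedule_cost(best)
            start = time.monotonic()
            while time.monotonic() - start < deadline_s:
                candidate = best[:]
                i, j = random.sample(range(len(candidate)), 2)  # repair move: swap two tasks
                candidate[i], candidate[j] = candidate[j], candidate[i]
                cost = schedule_cost(candidate)
                if cost < best_cost:  # keep improvements only
                    best, best_cost = candidate, cost
            return best, best_cost

        durations = [5, 2, 8, 1, 9, 3]  # hypothetical task durations
        print(anytime_schedule(durations, deadline_s=0.1))

    Given a larger time budget, the loop simply performs more repair steps, which is exactly the trade-off between solution quality and limited computation time that the symposium topics refer to.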

    Physical database design in document stores

    Thesis under joint supervision (cotutelle) between Universitat Politècnica de Catalunya and Université libre de Bruxelles. NoSQL is an umbrella term used to classify storage systems that are alternatives to traditional Relational Database Management Systems (RDBMSs). Among these, document stores have gained popularity mainly due to their semi-structured data storage model and rich query capabilities, and they encourage users to take a data-first rather than a design-first approach. Database design on document stores is mainly carried out in a trial-and-error or ad hoc rule-based manner instead of through a formal process such as normalization in an RDBMS. However, these approaches can easily lead to a non-optimal design, resulting in additional costs in the long run. This PhD thesis aims to provide a novel multi-criteria approach to database design in document stores. Most existing approaches optimize only query performance, yet other factors, such as storage requirements and the complexity of the stored documents, are specific to each use case. Because the different combinations of referencing and nesting of data create a large solution space of alternative designs, we believe multi-criteria optimization is well suited to this problem. To apply it, we must address several issues. First, we evaluate the impact of alternative storage representations of semi-structured data. There are multiple, equivalent ways to physically represent semi-structured data, but there is a lack of evidence about the potential impact on space and query performance, so we embark on the task of quantifying that precisely for document stores. We empirically compare multiple ways of representing semi-structured data, allowing us to derive a set of guidelines for efficient physical database design considering both JSON and relational options in the same palette. Next, we need a formal canonical model that can represent alternative designs. We propose a hypergraph-based approach for representing heterogeneous data store designs: we extend and formalize an existing common programming interface to NoSQL systems as hypergraphs, define design constraints and query transformation rules for representative data store types, propose a simple query rewriting algorithm, and provide a prototype implementation together with a storage statistics estimator. We then require a formal query cost model to estimate and evaluate query performance on alternative document store designs. Document stores use primitive approaches to query processing, such as relying on the end user to specify the usage of indexes, instead of a formal cost model; comparing how alternative designs perform on a specific query demands a more reliable approach. For this, we define a generic storage and query cost model based on disk access and memory allocation. As all document stores carry out data operations in memory, we first estimate memory usage by considering the characteristics of the stored documents, their access patterns, and the memory management algorithms; then, using this estimation and the metadata storage size, we introduce a cost model for random-access queries. We validate our work on two well-known document store implementations. The results show that the memory usage estimates have an average precision of 91% and that predicted costs are highly correlated with actual execution times. During this work, we also suggested several improvements to document stores. Finally, we implement the automated database design solution using multi-criteria optimization. We introduce an algebra of transformations that can systematically modify a design in our canonical representation and, using it, implement a local search algorithm driven by a loss function that can propose near-optimal designs with high probability. Compared with an existing document store data design solution, our proposed designs have better performance and are more compact, with less redundancy.
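    To make the final step concrete, here is a minimal, hypothetical sketch of a loss-driven local search over candidate designs, in the spirit of what the abstract describes. The design encoding, the two transformation moves, the three criterion functions, and the weights are all illustrative assumptions, not the thesis's actual transformation algebra or cost model.

        import random

        # Toy stand-ins for the competing criteria; a "design" here is just
        # (nesting_depth, num_collections) for illustration.
        def query_cost(d):  return d[1] * 2 + d[0]   # more collections => more joins
        def storage(d):     return 10 / (1 + d[0])   # nesting duplicates less metadata
        def complexity(d):  return d[0] ** 2         # deep nesting is harder to read

        def loss(design, weights, criteria):
            # Weighted sum combining the criteria into a single objective.
            return sum(w * f(design) for w, f in zip(weights, criteria))

        def neighbours(design):
            # Transformation moves: nest one level deeper, or unnest and split.
            depth, colls = design
            return [(depth + 1, max(1, colls - 1)), (max(0, depth - 1), colls + 1)]

        def local_search(initial, weights, criteria, iters=200):
            # Greedy local search driven by the loss function.
            best = initial
            best_loss = loss(best, weights, criteria)
            for _ in range(iters):
                cand = random.choice(neighbours(best))
                cand_loss = loss(cand, weights, criteria)
                if cand_loss < best_loss:  # keep strict improvements only
                    best, best_loss = cand, cand_loss
            return best, best_loss

        print(local_search((0, 8), weights=(1.0, 1.0, 0.5),
                           criteria=(query_cost, storage, complexity)))

    Adjusting the weights shifts the search toward faster queries, smaller storage, or simpler documents, which is the point of treating the design problem as multi-criteria optimization.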

    Library Publishing Toolkit

    Both public and academic libraries are invested in the creation and distribution of information and digital content. They have morphed from keepers of content into content creators and curators, and seek best practices and efficient workflows with emerging publishing platforms and services. The Library Publishing Toolkit looks at the broad and varied landscape of library publishing through discussions, case studies, and shared resources. From supporting writers and authors in the public library setting to hosting open access journals and books, this collection examines opportunities for libraries to leverage their position and resources to create and provide access to content. The Library Publishing Toolkit is a project funded partially by Bibliographic Databases and Interlibrary Resources Sharing Program funds, which are administered and supported by the Rochester Regional Library Council. The toolkit is a joint effort between Milne Library at SUNY Geneseo and the Monroe County Library System to identify trends in library publishing, seek out best practices to implement and support such programs, and share the best tools and resources. Our goals are to: develop strategies libraries can use to identify types of publishing services and content that can be created and curated by libraries; assess trends in digital content creation and publishing that can be useful in libraries, and suggest potential future projects; and identify efficient workflows for distributing content for free online, with potential for some cost recovery in print-on-demand markets. A list of chapters is available in the full record.

    The University Library System, University of Pittsburgh: How & Why We Publish

    The University Library System (ULS), University of Pittsburgh began its e-journal publishing program in 2007 and in five years has quickly grown to publish 34 peer-reviewed scholarly research journals. In this chapter, we describe the rationale for and the genesis of this program to publish new original content, explain how the program evolved, and give insight into the direction it is likely to take in the future. The ULS has built an extensive digital publishing program over the past two decades. Beginning with digitization projects to reformat the ULS' unique collections, the program now includes well over 100,000 digital objects in over 100 thematic digital collections, including photographs, manuscripts, maps, books, journal articles, electronic theses and dissertations, government documents, and other gray literature such as working papers, white papers, and technical reports. The development of the ULS publishing program was driven by a strong and enduring institutional commitment to Open Access to scholarly information. The organization has placed strategic emphasis on leadership in transforming the patterns of scholarly communication and supporting researchers not only in discovering and accessing scholarly information, but also in producing and sharing new knowledge and creating original scholarly research. In pursuit of these goals, the ULS has developed a suite of specific tools and techniques to build a highly cost-efficient e-journal publishing program. The ULS provides its publishing partners with a hardware and software platform and associated electronic publishing services using the open source Open Journal Systems (OJS) software developed by the Public Knowledge Project. This platform allows for richly customizable management of all stages of editorial workflow. In addition, OJS offers a number of reader tools to enhance content discovery and use, including multilingual support for both online interfaces and content in many languages, persistent URLs, RSS feeds, tools for bookmarking and sharing articles through social networking sites, full-text searching, and compliance with the Open Archives Initiative Protocol for Metadata Harvesting. Additional services offered by the ULS include consultation on editorial workflow management, software configuration, graphic design services, initial training, online usage statistics, review of all new published issues for metadata quality, and ongoing systems support. The ULS also provides ISSN registration, assigns DOIs, and assists in promotional efforts to establish the journal. Digital preservation is facilitated through LOCKSS. Steps to start up a new scholarly journal are covered, along with common pitfalls to avoid and techniques that help with clear communication and management of mutual expectations between publisher and publishing partners. Quality control is discussed, including careful selection of partners, conducting peer reviews, maintaining academic quality, advising on publishing best practices, and measuring impact. With each passing year and each acquisitions budget cycle, research libraries have more to gain by becoming publishers. By publishing new Open Access content, libraries can not only help meet the most fundamental needs of the researchers they support, but simultaneously help transform today's inflationary cost model for serials. The publication model described in this chapter can serve as a guide for libraries wishing to implement similar programs.

    Anticipating the Astronaut: Subject Formation in Early American Space Medicine, 1949-1959

    This project expands the scope of existing Space Race histories of the American astronaut, mostly focused on daring test pilots in the 1960s, by examining a prior decade of research conducted by doctors and psychologists in the military field of space medicine on a surprising array of non-test-pilot subjects. Examining the historical, social, cultural, and political dimensions of space medicine's pre-NASA work, which began in 1949, reveals two key insights. The first is that the astronaut emerged in the immediate aftermath of World War Two and developed in concert with the Cold War for a decade before NASA began operations. The second is that the kind of person space medicine experts came to consider right for space was not solely determined by the requirements of spacecraft control and environmental systems, but also by cultural ideas about bodies, minds, technology, and extreme environments in post-war American society. Based on research conducted at NASA, USAF, and NARA archives, this study examines four nearly forgotten but revealing episodes in which non-test-pilot subjects were used to establish standards and practices for astronauts later adopted and adapted by NASA. This project's four main chapters each focus on work with a different type of subject: a young, non-flying airman's week-long ordeal playing the role of astronaut in the first Space Cabin Simulator; a mountain-based study of high-altitude Indigenous people for astronaut acclimatization; the post-flight lives of monkeys Able and Baker, America's first celebrity space animals; and the Lovelace Woman in Space Program, a comparative study of women pilots for space fitness. Beyond the purely technical problem of "Who can survive a spaceflight?", the work of developing the astronaut posed a more fundamental but unspoken question about Americans: "Who should fight the Cold War?" Critically examining space medicine's work with these non-test-pilot subjects defamiliarizes the astronaut, recasting this utopian hero of the civilian Space Race as an older Cold War military creation with a surprisingly dystopian origin. Moving beyond space-race mythologizing and internalist scientific-progress narratives, this approach challenges the enduring gendered and racialized vision of the white, male, military pilot at its origin, in an effort to demilitarize the astronaut and human ventures in space.

    New tools and specification languages for biophysically detailed neuronal network modelling

    Increasingly detailed data are being gathered on the molecular, electrical and anatomical properties of neuronal systems both in vitro and in vivo. These range from the kinetic properties and distribution of ion channels and synaptic plasticity mechanisms, through electrical activity in neurons, to detailed anatomical connectivity within neuronal microcircuits from connectomics data. Publications describing these experimental results often set them in the context of higher level network behaviour. Biophysically detailed computational modelling provides a framework for consolidating these data, for quantifying the assumptions about underlying biological mechanisms, and for ensuring consistency in the explanation of the phenomena across scales. However, such multiscale biophysically detailed models are not currently in widespread use by the experimental neuroscience community. Reasons for this include the relative inaccessibility of software for creating these models, the range of specialised scripting languages used by the available simulators, and the difficulty of creating and managing large scale network simulations. This thesis describes new solutions to facilitate the creation, simulation, analysis and reuse of biophysically detailed neuronal models. The graphical application neuroConstruct allows detailed cell and network models to be built in 3D and run on multiple simulation platforms without detailed programming knowledge. NeuroML is a simulator independent language for describing models containing detailed neuronal morphologies, ion channels, synapses, and 3D network connectivity. New solutions have also been developed for creating and analysing network models at much closer to biological scale on high performance computing platforms. A number of detailed neocortical, cerebellar and hippocampal models have been converted for use with these tools. The tools and models I have developed have already started to be used for original scientific research. It is hoped that this work will lead to a more solid foundation for creating, validating, simulating and sharing ever more realistic models of neurons and networks.
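    To give a flavour of what a declarative, simulator-independent model description looks like, the snippet below builds a small XML document in the spirit of NeuroML. The element and attribute names here are simplified placeholders chosen for illustration; they are not the actual NeuroML schema, which defines its own element set and units.

        import xml.etree.ElementTree as ET

        # Build a toy, NeuroML-inspired description: one cell type, two
        # populations, and a synaptic projection between them.
        root = ET.Element("network", id="toyNet")

        cell = ET.SubElement(root, "cellType", id="pyramidal")
        ET.SubElement(cell, "channelDensity", ionChannel="na",
                      condDensity="120 mS_per_cm2")
        ET.SubElement(cell, "channelDensity", ionChannel="k",
                      condDensity="36 mS_per_cm2")

        ET.SubElement(root, "population", id="popA",
                      cellType="pyramidal", size="100")
        ET.SubElement(root, "population", id="popB",
                      cellType="pyramidal", size="50")

        ET.SubElement(root, "projection", id="feedforward",
                      presynaptic="popA", postsynaptic="popB",
                      synapse="ampa", probability="0.1")

        print(ET.tostring(root, encoding="unicode"))

    The point the abstract makes is that such a declarative description can be handed unchanged to different simulators, with tools like neuroConstruct doing the translation to each simulator's scripting language.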

    Seventh Annual Workshop on Space Operations Applications and Research (SOAR 1993), volume 1

    This document contains papers presented at the Space Operations, Applications and Research (SOAR) Symposium hosted by NASA/Johnson Space Center (JSC) on August 3-5, 1993, and held at the JSC Gilruth Recreation Center. SOAR included NASA and USAF programmatic overviews, a plenary session, panel discussions, panel sessions, and exhibits. It invited technical papers in support of U.S. Army, U.S. Navy, Department of Energy, NASA, and USAF programs in the following areas: robotics and telepresence, automation and intelligent systems, human factors, life support, and space maintenance and servicing. SOAR was concerned with Government-sponsored research and development relevant to aerospace operations. More than 100 technical papers, 17 exhibits, a plenary session, several panel discussions, and several keynote speeches were included in SOAR '93.

    Transformation of graphical models to support knowledge transfer

    Human experts are able to flexibly adjust their decision behaviour to the situation at hand. This capability pays off in situations with limited resources, such as time restrictions. In such situations, it is particularly advantageous to adapt the underlying knowledge representation and to make use of decision models at different levels of abstraction. Furthermore, human experts have the ability to include not only uncertain information but also vague perceptions in decision making. Classical decision-theoretic models are based directly on the concept of rationality, whereby the decision behaviour is prescribed by the principle of maximum expected utility: for each observation, an optimal decision function prescribes an action that maximizes expected utility. Modern graph-based methods like Bayesian networks or influence diagrams make decision-theoretic methods attractive from a modelling perspective. Their main disadvantage is complexity: finding an optimal decision can become very expensive, and inference in decision networks is known to be NP-hard. This dissertation aims at combining the advantages of decision-theoretic models with those of rule-based systems by transforming a decision-theoretic model into a fuzzy rule-based system. Fuzzy rule bases are efficient to evaluate, can approximate non-linear functional dependencies, and keep the resulting action model interpretable. A new transformation process supports generating such rule-based representations from decision models, providing an efficient implementation architecture while representing knowledge in an explicit, intelligible way. First, an agent can apply the parameterized structure learning algorithm newly introduced in this work to identify the structure of a Bayesian network. Preference learning and the specification of probability information then allow decision and utility nodes to be modelled, yielding a consolidated decision-theoretic model. A transformation algorithm compiles a rule base from this model, with an approximation measure quantifying the expected utility loss as a quality criterion. The practicality of the concept is demonstrated on the problem of representing condition monitoring results for a rotation spindle.
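    To make the target representation concrete, here is a minimal, hypothetical sketch of evaluating a small fuzzy rule base of the kind such a transformation might emit. The triangular membership functions, the rule set, and the weighted-average defuzzification are generic textbook choices assumed for illustration, not the dissertation's actual compilation algorithm.

        def tri(x, a, b, c):
            # Triangular membership function peaking at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        # Fuzzy sets over a sensor reading (e.g. spindle vibration, arbitrary units).
        low    = lambda x: tri(x, -1.0, 0.0, 5.0)
        medium = lambda x: tri(x, 2.0, 5.0, 8.0)
        high   = lambda x: tri(x, 5.0, 10.0, 11.0)

        # Rules: (membership function, crisp action value of its conclusion).
        rules = [
            (low,    0.0),   # IF vibration IS low    THEN keep running
            (medium, 0.5),   # IF vibration IS medium THEN schedule inspection
            (high,   1.0),   # IF vibration IS high   THEN stop the spindle
        ]

        def infer(x):
            # Sugeno-style weighted average over all fired rules.
            num = sum(mf(x) * action for mf, action in rules)
            den = sum(mf(x) for mf, _ in rules)
            return num / den if den else None

        print(infer(6.0))  # medium and high both fire -> value between 0.5 and 1.0

    Evaluating such a rule base is a handful of arithmetic operations per rule, which is the efficiency argument the abstract makes against running NP-hard inference in the decision network at decision time.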

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, held Dec. 7-9, 1993, in Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation.

    Innovative intelligent sensors to objectively understand exercise interventions for older adults

    The population of most western countries is ageing and, therefore, the ageing issue now matters more than ever. According to United Nations reports from 2017, there were 15.8 million (26.9%) people over 60 years of age in the United Kingdom, and the number is projected to reach 23.5 million (31.5%) by 2050. Spending on medical treatment and healthcare for older adults accounts for two-fifths of the UK National Health Service (NHS) budget. Keeping older people healthy is a challenge. In general, exercise is believed to benefit both mental and physical health. Specifically, many studies have shown that resistance band exercises have potentially positive effects on both mental and physical health. However, treatment using resistance band exercise is usually carried out in unmonitored environments, such as at home or in a rehabilitation centre; therefore, the exercise cannot be measured and/or quantified accurately. Despite many years of research, the true effectiveness of resistance band exercises remains unclear. [Continues.]