
    IIASA Conference '76

    The IIASA Conference '76 was the first General Conference of the International Institute for Applied Systems Analysis. It provided a major forum for review of the creation, development, research, and future role of IIASA; thus, these proceedings provide a comprehensive report on the first three years of the Institute. The proceedings appear in two volumes. Volume 1 contains presentations, comments, and discussions on the concept of IIASA, its creation and research strategy, its studies of global issues (including energy, food, and global development), its work on regional issues, the relationship between analysis and policy-making, and the role of policy analysis in an international setting. The invited comments of the National Member Organizations on the development of the Institute and on the Conference also appear in Volume 1. A list of participants and the table of contents for Volume 2 are appended. Volume 2 contains presentations, comments, and discussions of the research areas: Resources and Environment, Human Settlements and Services, Management and Technology, and System and Decision Sciences. The table of contents of Volume 1 is appended. Each volume contains brief biographies of the authors of presentations in that volume. Recognition is due to the Council of the Institute and the Austrian Government for their significant contribution to the development of the Institute; to the participants of the Conference for their support and guidance; and to the IIASA staff for the spirit and effort devoted to the Conference and the preparation of the proceedings.

    Artificial Intelligence and medicine

    While problems in the doctor-patient relationship and the deficiency of the clinical examination in medical practice are discussed, which leaves diagnoses more dependent on complementary tests, the importance of the computer in medicine and public health is increasingly emphasized. This is happening through the adoption of clinical decision support systems, the integrated use of new technologies such as wearable devices, and the storage and processing of large volumes of patient and population health data. Data storage and processing capacity has increased exponentially over recent years, creating the concept of "big data". Artificial Intelligence processes such data using algorithms that tend to improve through their own operation (self-learning), proposing increasingly precise diagnostic hypotheses. Computerized clinical decision support systems, processing patient data, have indicated diagnoses with a high degree of accuracy. IBM's supercomputer, named "Watson", has stored an extraordinary volume of health information, creating neural networks of data processing in several fields, such as oncology and genetics.
Watson has assimilated dozens of medical textbooks, all the information from PubMed and Medline, and thousands of medical records from the Memorial Sloan Kettering Cancer Center. Its oncology network is now consulted by specialists from a large number of hospitals all over the world. Google's London-based DeepMind has stored data from 1.6 million National Health Service (NHS) patients, enabling the development of new clinical decision support systems that analyze these patient data, generate alerts on their evolution, avoid contraindicated or conflicting medications, and send timely updates to health professionals about their patients. Analyzing a set of dermatological images in a melanoma study, DeepMind showed a higher level of performance than that of specialists (76% versus 70.5%), with a specificity of 62% versus 59% and a sensitivity of 82%. Nevertheless, whereas the computer provides the know-what, it is the physician who will discuss the health problem and its possible solutions with the patient, indicating the know-why of his or her case. This requires continuous attention to the quality of medical education, emphasizing knowledge of the physiopathology of organic processes and the development of the abilities to listen to, examine, and advise a patient and, consequently, to propose a diagnosis and a treatment for his or her health problem, following its evolution.
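The reported accuracy, sensitivity, and specificity figures follow directly from confusion-matrix counts. A minimal sketch in Python, using hypothetical counts chosen only to reproduce metrics of the same order as those cited (the counts themselves are not from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positive rate: melanomas correctly flagged
    specificity = tn / (tn + fp)   # true negative rate: benign lesions correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return accuracy, sensitivity, specificity

# Hypothetical set of 200 lesion images: 100 melanomas, 100 benign.
acc, sens, spec = classification_metrics(tp=82, fp=38, tn=62, fn=18)
print(f"accuracy={acc:.0%} sensitivity={sens:.0%} specificity={spec:.0%}")
# -> accuracy=72% sensitivity=82% specificity=62%
```

Note that accuracy alone can mislead when classes are imbalanced, which is why the study reports sensitivity and specificity separately.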

    A multicriteria analysis of photovoltaic systems: Energetic, environmental, and economic assessments

    The development of photovoltaic (PV) energy has led to rising efficiencies, better reliability, and falling prices. A multicriteria analysis (MCA) of PV systems is proposed in this paper in order to evaluate the sustainability of alternative projects. The investigations are presented using multiple indicators: Energy Payback Time (EPBT), Energy Return on Investment (EROI), Greenhouse Gas per kilowatt-hour (GHG/kWh), Greenhouse Gas Payback Time (GPBT), Greenhouse Gas Return on Investment (GROI), Net Present Value (NPV), Discounted Payback Time (DPBT), and Discounted Aggregate Cost-Benefit (D(B/C)A). PV energy is a relevant player in the global electricity market and can play a key role in sustainable growth.
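As a rough illustration of how several of these indicators are commonly defined (the figures below are hypothetical, not values from the paper), the energy and economic metrics can be sketched as:

```python
def epbt(embodied_energy_kwh, annual_output_kwh):
    """Energy Payback Time: years for the system to generate its embodied energy."""
    return embodied_energy_kwh / annual_output_kwh

def eroi(annual_output_kwh, lifetime_years, embodied_energy_kwh):
    """Energy Return on Investment: lifetime energy output per unit of embodied energy."""
    return annual_output_kwh * lifetime_years / embodied_energy_kwh

def npv(investment, annual_cash_flow, rate, years):
    """Net Present Value of a constant annual cash flow against an upfront investment."""
    return -investment + sum(annual_cash_flow / (1 + rate) ** t
                             for t in range(1, years + 1))

def dpbt(investment, annual_cash_flow, rate, years):
    """Discounted Payback Time: first year in which cumulative
    discounted cash flow covers the investment; None if never."""
    cumulative = -investment
    for t in range(1, years + 1):
        cumulative += annual_cash_flow / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None

# Hypothetical rooftop PV project: 4000 kWh embodied energy, 1600 kWh/year output,
# EUR 5000 investment, EUR 500/year net cash flow, 5% discount rate, 25-year life.
print(epbt(4000, 1600))              # -> 2.5 (years)
print(eroi(1600, 25, 4000))          # -> 10.0
```

The discounted indicators (NPV, DPBT) penalize cash flows far in the future, so a project with an attractive simple payback can still show a long DPBT at higher discount rates.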

    A comparison of reimbursement recommendations by European HTA agencies: Is there opportunity for further alignment?

    Introduction: In Europe and beyond, the rising costs of healthcare and limited healthcare resources have resulted in the implementation of health technology assessment (HTA) to inform health policy and reimbursement decision-making. European legislation has provided a harmonized route for the regulatory process with the European Medicines Agency, but reimbursement decision-making still remains the responsibility of each country. There is a recognized need to move toward a more objective and collaborative reimbursement environment for new medicines in Europe. Therefore, the aim of this study was to objectively assess and compare the national reimbursement recommendations of 9 European jurisdictions following European Medicines Agency (EMA) recommendation for centralized marketing authorization. Methods: Using publicly available data and newly developed classification tools, this study appraised 9 European reimbursement systems by assessing HTA processes and the relationship between the regulatory, HTA, and decision-making organizations. Each national HTA agency was classified according to two novel taxonomies. The System taxonomy focuses on the position of the HTA agency within the national reimbursement system according to the relationship between the regulator, the HTA-performing agency, and the reimbursement decision-making coverage body. The HTA Process taxonomy distinguishes between the individual HTA agency's approach to economic and therapeutic evaluation and the inclusion of an independent appraisal step. The taxonomic groups were subsequently compared with national HTA recommendations. Results: This study identified European national reimbursement recommendations for 102 new active substances (NASs) approved by the EMA from 2008 to 2012. These reimbursement recommendations were compared using a novel classification tool, which identified alignment between the organizational structure of reimbursement systems (System taxonomy) and HTA recommendations.
However, there was less alignment between the HTA processes and recommendations. Conclusions: In order to move toward a more harmonized HTA environment within Europe, it is first necessary to understand the variation in HTA practices within Europe. This study has identified alignment between HTA recommendations and the System taxonomy, and one of the major implications of this study is that such alignment could support a more collaborative HTA environment in Europe.

    Agent-based modeling: a systematic assessment of use cases and requirements for enhancing pharmaceutical research and development productivity.

    A crisis continues to brew within the pharmaceutical research and development (R&D) enterprise: productivity continues declining as costs rise, despite ongoing, often dramatic scientific and technical advances. To reverse this trend, we offer various suggestions for both the expansion and broader adoption of modeling and simulation (M&S) methods. We suggest strategies and scenarios intended to enable new M&S use cases that directly engage R&D knowledge generation and build actionable mechanistic insight, thereby opening the door to enhanced productivity. What M&S requirements must be satisfied to access and open the door, and begin reversing the productivity decline? Can current methods and tools fulfill the requirements, or are new methods necessary? We draw on the relevant, recent literature to provide and explore answers. In so doing, we identify essential, key roles for agent-based and other methods. We assemble a list of requirements necessary for M&S to meet the diverse needs distilled from a collection of research, review, and opinion articles. We argue that to realize its full potential, M&S should be actualized within a larger information technology framework, a dynamic knowledge repository, wherein models of various types execute, evolve, and increase in accuracy over time. We offer some details of the issues that must be addressed for such a repository to accrue the capabilities needed to reverse the productivity decline.
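As a minimal illustration of the agent-based style of M&S the authors advocate (this is not their model; the stage names and pass probabilities below are invented for the sketch), consider a toy pipeline in which each compound agent advances through R&D stages with stage-specific attrition:

```python
import random

class CompoundAgent:
    """Toy agent: a drug candidate advancing through R&D stages with attrition."""
    STAGES = ["discovery", "preclinical", "phase1", "phase2", "phase3", "approved"]
    # Hypothetical per-stage success probabilities, not empirical rates.
    PASS_PROB = {"discovery": 0.6, "preclinical": 0.5, "phase1": 0.6,
                 "phase2": 0.35, "phase3": 0.6}

    def __init__(self):
        self.stage = "discovery"
        self.failed = False

    def step(self, rng):
        """Advance one stage on success, or fail permanently on attrition."""
        if self.failed or self.stage == "approved":
            return
        if rng.random() < self.PASS_PROB[self.stage]:
            self.stage = self.STAGES[self.STAGES.index(self.stage) + 1]
        else:
            self.failed = True

def simulate(n_compounds=1000, n_steps=5, seed=42):
    """Run the pipeline and return how many compounds reach approval."""
    rng = random.Random(seed)
    agents = [CompoundAgent() for _ in range(n_compounds)]
    for _ in range(n_steps):
        for agent in agents:
            agent.step(rng)
    return sum(agent.stage == "approved" for agent in agents)
```

Even this toy version shows the appeal of the approach: stage-level mechanistic assumptions compose into portfolio-level outcomes, and individual agent rules can be refined as knowledge accrues, which is the spirit of the dynamic knowledge repository the authors describe.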

    The necessities for building a model to evaluate Business Intelligence projects: A literature review

    In recent years, Business Intelligence (BI) systems have consistently been rated as one of the highest priorities of Information Systems (IS) and business leaders. BI allows firms to apply information to support their processes and decisions by combining its capabilities in both organizational and technical issues. Many companies spend a significant portion of their IT budgets on business intelligence and related technology. Evaluation of BI readiness is vital because it serves two important goals. First, it identifies gap areas where the company is not ready to proceed with its BI efforts; by identifying BI readiness gaps, wasted time and resources can be avoided. Second, the evaluation indicates what is needed to close the gaps and implement BI with a high probability of success. This paper presents an overview of BI and the necessities for evaluation of readiness. Key words: Business intelligence, Evaluation, Success, Readiness. Comment: International Journal of Computer Science & Engineering Survey (IJCSES) Vol.3, No.2, April 201