9 research outputs found

    HdSC: high-level modeling for native platform simulation with support for HdS development

    The remarkable advances in computer systems technology have enabled the rise of innovative devices, such as modern touch-sensitive cell phones and tablets. Properly managing these diverse interfaces requires Hardware-dependent Software (HdS), which is responsible for controlling and accessing these devices. Beyond this complex arrangement of components, the multiprocessing paradigm has been adopted to increase platform performance and meet the growing demand for more integrated features. Due to the system design gap, both industry and academia have been researching more efficient processes to build and simulate these increasingly complex systems. The premise of state-of-the-art work is to use models that combine a high level of abstraction with high accuracy, enabling the designer to evaluate the system quickly without relying on slow and complex models based on instruction set simulators (ISS). This work defines a set of constructors for modeling processor-based platforms, with support for HdS development and component reusability, techniques for static estimation of simulated time in a native simulation environment, and support for multiprocessor platforms. Experiments were carried out with I/O-intensive, compute-intensive, and multiprocessed applications, yielding an average performance speedup of about 1,000 times and an average timing estimation error below 3% compared with a reference platform based on an ISS.
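    The central idea described above, running the application code natively for speed while charging statically estimated delays to a simulated clock, can be illustrated by a minimal sketch. Everything below (the SimClock class, the per-element cycle cost, the clock frequency) is an assumption made for illustration and is not the thesis's actual set of constructors:

```python
CPU_FREQ_HZ = 100_000_000  # assumed target processor frequency

class SimClock:
    """Accumulates statically estimated cycles alongside native execution."""
    def __init__(self):
        self.cycles = 0

    def advance(self, estimated_cycles):
        # Charge the statically estimated cost of the block just executed.
        self.cycles += estimated_cycles

    @property
    def time_s(self):
        return self.cycles / CPU_FREQ_HZ

def copy_buffer(src, dst, clock):
    # Functional behaviour runs natively (fast), while the timing model
    # charges an assumed per-element cost derived from static analysis.
    for i, value in enumerate(src):
        dst[i] = value
    clock.advance(4 * len(src))  # hypothetical 4 cycles per element

clock = SimClock()
src = list(range(1024))
dst = [0] * len(src)
copy_buffer(src, dst, clock)
print(f"simulated time: {clock.time_s * 1e6:.2f} us")
```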

    RECOGNITION OF EYE STATES USING DEEP LEARNING MACHINES FROM BRAIN WAVES

    The brain-computer interface is one of the emerging fields of human-computer interaction due to its broad spectrum of applications, especially those that deal with human cognition. In this work, electroencephalography (EEG) is used as the base data for classifying the state of the eyes (open or closed) by applying Long Short-Term Memory (LSTM) networks and variants. For benchmarking purposes, the EEG Eye State data set from the UCI Machine Learning Repository was used. The results indicate that bidirectional LSTM and GRU models are applicable to the classification of these data, reaching an accuracy above 95% and performing well compared with more computationally expensive models.
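    As a rough illustration of the kind of model the abstract describes, the sketch below trains a bidirectional LSTM on windowed EEG samples. The file name, window length, layer sizes, and training settings are assumptions for illustration, not the paper's actual configuration; the UCI data set has 14 EEG channels and a binary eye-state label.

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Hypothetical local CSV copy of the UCI "EEG Eye State" data set:
# 14 EEG channels followed by a binary eyeDetection label.
data = pd.read_csv("eeg_eye_state.csv")
X = data.iloc[:, :-1].to_numpy(dtype="float32")
y = data.iloc[:, -1].to_numpy(dtype="int32")

# Normalize the channels and slice the recording into fixed-length
# windows, labelling each window with the eye state of its last sample.
X = (X - X.mean(axis=0)) / X.std(axis=0)
win = 32
Xw = np.stack([X[i : i + win] for i in range(len(X) - win)])
yw = y[win - 1 : -1]

# Bidirectional LSTM classifier: (window, channels) -> open/closed.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(win, X.shape[1])),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(Xw, yw, epochs=10, batch_size=64, validation_split=0.2)
```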

    Pervasive gaps in Amazonian ecological research

    Biodiversity loss is one of the main challenges of our time,[1,2] and attempts to address it require a clear understanding of how ecological communities respond to environmental change across time and space.[3,4] While the increasing availability of global databases on ecological communities has advanced our knowledge of biodiversity sensitivity to environmental changes,[5,6,7] vast areas of the tropics remain understudied.[8,9,10,11] In the American tropics, Amazonia stands out as the world's most diverse rainforest and the primary source of Neotropical biodiversity,[12] but it remains among the least known forests in America and is often underrepresented in biodiversity databases.[13,14,15] To worsen this situation, human-induced modifications[16,17] may eliminate pieces of the Amazon's biodiversity puzzle before we can use them to understand how ecological communities are responding. To increase generalization and applicability of biodiversity knowledge,[18,19] it is thus crucial to reduce biases in ecological research, particularly in regions projected to face the most pronounced environmental changes. We integrate ecological community metadata of 7,694 sampling sites for multiple organism groups in a machine learning model framework to map the research probability across the Brazilian Amazonia, while identifying the region's vulnerability to environmental change. 15%–18% of the most neglected areas in ecological research are expected to experience severe climate or land use changes by 2050. This means that unless we take immediate action, we will not be able to establish their current status, much less monitor how it is changing and what is being lost.
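    The mapping step described above, feeding sampling-site metadata into a machine learning framework to estimate research probability across the region, can be sketched roughly as below. The covariates, the synthetic accessibility-biased labels, and the random-forest choice are illustrative assumptions, not the paper's actual data or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cells = 5_000
# Hypothetical covariates per grid cell: distance to road (km),
# distance to nearest city (km), elevation (m).
X = np.column_stack([
    rng.exponential(20, n_cells),
    rng.exponential(100, n_cells),
    rng.uniform(0, 500, n_cells),
])
# Synthetic "was sampled" label, biased toward accessible cells to mimic
# the tendency of field sampling to cluster near roads and cities.
p_sampled = 1 / (1 + np.exp(0.08 * X[:, 0] + 0.02 * X[:, 1] - 2.0))
y = rng.random(n_cells) < p_sampled

# Fit a classifier and read its predicted probabilities as a per-cell
# "research probability" surface that could be mapped.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)
research_probability = model.predict_proba(X)[:, 1]
```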

    IVM: an interoperable, iterative, and incremental functional verification methodology

    The growing demand for electronic devices and ever-higher integration capability have created extremely complex systems on a chip, known as Systems-on-Chip or SoCs. Running counter to this trend, the time-to-market in which these systems must be built has been continually reduced, forcing many more functionalities to be implemented in ever-shorter time periods. Quality control of the final product demands the Functional Verification activity, which consists of applying a set of techniques that stimulate the system in search of bugs. This activity is both necessary and extremely expensive, consuming up to about 80% of the final product cost. It is in this context that this work proposes a Functional Verification methodology called IVM, which provides the means to deliver high-quality systems while still meeting the strict time constraints imposed by the market. Based on well-established and trusted methodologies such as OVM and VeriSC, IVM defines an architectural organization and an activity flow that incorporate the main features of both previously disjoint approaches. This integration of techniques and concepts results in a more efficient verification flow, allowing systems to meet the expected cost, schedule, and quality.
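    The testbench organization shared by OVM/VeriSC-style methodologies (random stimuli driving both the design under verification and a reference model, with a checker comparing their outputs) can be illustrated by the conceptual sketch below. It is a plain-Python analogy with a hypothetical adder design and an injected bug, not IVM's actual components:

```python
import random

def reference_model(a, b):
    # Golden behaviour the design under verification must match.
    return a + b

class BuggyAdderDUT:
    # Stand-in "design under verification" with an injected corner-case bug.
    def compute(self, a, b):
        return 15 if (a, b) == (7, 7) else a + b

def run_verification(n_stimuli=1000, seed=1):
    random.seed(seed)
    dut, failures = BuggyAdderDUT(), []
    for _ in range(n_stimuli):
        a, b = random.randint(0, 15), random.randint(0, 15)  # driver: random stimuli
        expected = reference_model(a, b)                      # reference model output
        actual = dut.compute(a, b)                            # monitor observes the DUT
        if actual != expected:                                # checker compares the two
            failures.append((a, b, expected, actual))
    return failures

print(run_verification())
```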