
    AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0

    Human-centered artificial intelligence (HCAI) systems are central to Industry 5.0, a new phase of industrialization that places the worker at the center of the production process and uses new technologies to increase prosperity beyond jobs and growth. HCAI makes objectives attainable that neither humans nor machines could reach alone, but it also brings a new set of challenges. Our proposed approach addresses these challenges through the knowlEdge architecture, which enables human operators to implement AI solutions using a zero-touch framework. It relies on containerized AI model training and execution, supported by a robust data pipeline and rounded off with human feedback and evaluation interfaces. The result is a platform built from a number of components that span all major areas of the AI lifecycle. We outline both the architectural concepts and implementation guidelines, explain how they advance HCAI systems and Industry 5.0, and address the problems we encountered while implementing these ideas within the edge-to-cloud continuum. Further improvements to our approach may enhance the use of AI in Industry 5.0 and strengthen trust in AI systems
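
    The abstract does not spell out the implementation, so purely as an illustration of what containerized, zero-touch training with a human-feedback step could look like, the sketch below uses the Docker SDK for Python. The image name, environment variables, storage URIs and the feedback function are hypothetical and are not part of the knowlEdge platform.

    ```python
    # Minimal sketch of a zero-touch containerized training step (hypothetical names throughout).
    import docker


    def run_training_job(image: str, dataset_uri: str, model_uri: str) -> str:
        """Launch a containerized training run and wait for it to finish."""
        client = docker.from_env()
        container = client.containers.run(
            image,                           # hypothetical trainer image, e.g. "trainer:latest"
            detach=True,
            environment={
                "DATASET_URI": dataset_uri,  # fed by the data pipeline
                "MODEL_URI": model_uri,      # where the trained model is written
            },
        )
        result = container.wait()            # zero-touch: no operator interaction during training
        logs = container.logs().decode()
        container.remove()
        if result["StatusCode"] != 0:
            raise RuntimeError(f"training failed:\n{logs}")
        return model_uri


    def record_human_feedback(model_uri: str, rating: int, comment: str) -> dict:
        """Stand-in for the human feedback and evaluation interface mentioned in the abstract."""
        return {"model": model_uri, "rating": rating, "comment": comment}


    if __name__ == "__main__":
        model = run_training_job("trainer:latest", "s3://datasets/press-line", "s3://models/run-001")
        print(record_human_feedback(model, rating=4, comment="meets shop-floor accuracy target"))
    ```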

    Digitalization and Development

    This book examines the diffusion of digitalization and Industry 4.0 technologies in Malaysia by focusing on the ecosystem critical for their expansion. The chapters examine digital proliferation in the major sectors of agriculture, manufacturing, e-commerce and services, as well as the intermediary organizations essential for the orderly performance of socioeconomic agents. The book incisively reviews the policy instruments critical for the effective and orderly development of the embedding organizations, and the regulatory framework needed to quicken the appropriation of socioeconomic synergies from digitalization and Industry 4.0 technologies. It highlights the importance of collaboration between government, academic and industry partners, and makes key recommendations on how to encourage adoption of IR4.0 technologies in the short and long term. This book bridges the concepts and applications of digitalization and Industry 4.0 and will be a must-read for policy makers seeking to quicken the adoption of its technologies

    Exploring Consensus Algorithms in Quorum

    As blockchain technology matures, more industries are becoming interested in evaluating whether the technology can answer their needs for decentralized systems that guarantee data immutability and traceability. Quorum is a blockchain platform that accommodates enterprise use cases by extending Ethereum to support private transactions and a higher transaction throughput. To achieve this, Quorum replaced Ethereum’s proof-of-stake consensus mechanism with proof-of-authority alternatives, supporting four different algorithms: Raft, Clique, IBFT 1.0, and QBFT. This work explores Quorum’s consensus algorithms and how they affect performance and fault tolerance, in order to assess the best use cases for each and what should drive their choice. A GoQuorum network was set up, and benchmarks were run against it under different scenarios, changing only the consensus algorithm between scenarios. Results showed that Raft is the most performant consensus algorithm in Quorum for both private and public transactions. Additionally, QBFT achieved the same performance as IBFT, and Clique was the worst performer across the board, particularly due to its high resource usage. Regarding fault tolerance, it was found that bringing validator nodes down at random, when the network is configured for high availability, had no impact on networks under any of the consensus algorithms
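
    The thesis does not state how the benchmarks were driven; as one plausible way to reproduce a simple throughput measurement against a GoQuorum node, the sketch below sends value transfers through web3.py and times them. The RPC port, account handling, and the proof-of-authority middleware are assumptions about a local test network, not the benchmark setup used in the work.

    ```python
    # Rough throughput probe against a local GoQuorum RPC endpoint (all settings are assumptions).
    import time

    from web3 import Web3
    from web3.middleware import geth_poa_middleware  # needed for proof-of-authority chains (web3.py v5/v6)

    RPC_URL = "http://localhost:22000"  # typical RPC port of a local GoQuorum example network
    N_TX = 200

    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    w3.middleware_onion.inject(geth_poa_middleware, layer=0)

    sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]  # assumes unlocked node-managed accounts

    start = time.time()
    tx_hashes = [
        w3.eth.send_transaction({"from": sender, "to": receiver, "value": 1})
        for _ in range(N_TX)
    ]
    for tx_hash in tx_hashes:
        w3.eth.wait_for_transaction_receipt(tx_hash)
    elapsed = time.time() - start

    print(f"{N_TX / elapsed:.1f} tx/s under the currently configured consensus algorithm")
    ```

    Re-running the same probe after reconfiguring the network's consensus algorithm (Raft, Clique, IBFT, QBFT) yields the kind of comparison the thesis describes.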

    Multilingualism and the Public Sector in South Africa

    This book contributes to the discourse on language in South Africa with a specific focus on multilingualism and the public sector

    Frivolous Floodgate Fears

    When rejecting plaintiff-friendly liability standards, courts often cite a fear of opening the floodgates of litigation. Namely, courts point to either a desire to protect the docket of federal courts or a burden on the executive branch. But there is little empirical evidence exploring whether the adoption of a stricter standard can, in fact, decrease the filing of legal claims in this circumstance. This Article empirically analyzes and theoretically models the effect of adopting arguably stricter liability standards on litigation by investigating the context of one of the Supreme Court’s most recent reliances on this argument when adopting a stricter liability standard for causation in employment discrimination claims. In 2013, the Supreme Court held that a plaintiff proving retaliation under Title VII of the Civil Rights Act must prove that their participation in a protected activity was a but-for cause of the adverse employment action they experienced. Rejecting the arguably more plaintiff-friendly motivating-factor standard, the Court stated, “[L]essening the causation standard could also contribute to the filing of frivolous claims, which would siphon resources from efforts by employer[s], administrative agencies, and courts to combat workplace harassment.” Univ. of Tex. Sw. Med. Ctr. v. Nassar, 570 U.S. 338, 358 (2013). And over the past ten years, the Court has overturned the application of motivating-factor causation as applied to at least four different federal antidiscrimination statutes. Contrary to the Supreme Court’s concern that motivating-factor causation encourages frivolous charges, many employment law scholars worry that the heightened but-for standard will deter legitimate claims. This Article empirically explores these concerns, in part using data received from the Equal Employment Opportunity Commission (EEOC) through a Freedom of Information Act (FOIA) request. Specifically, it empirically tests whether the adoption of the but-for causation standard for claims filed under the Age Discrimination in Employment Act and by federal courts of appeals under the Americans with Disabilities Act has impacted the filing of discrimination claims and the outcome of those claims in federal court. Consistent with theory detailed in this Article, the empirical analysis provides evidence that the stricter standard may have increased the docket of the federal courts by decreasing settlement within the EEOC and during litigation. The empirical results weigh in on concerns surrounding the adoption of the but-for causation standard and provide evidence that the floodgates argument, when relied on to deter frivolous filings by changing liability standards, in fact, may do just the opposite by decreasing the likelihood of settlement in the short term, without impacting the filing of claims or other case outcomes
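
    The Article's exact specification is not given in this abstract; as a rough illustration of how one might test whether adopting the but-for standard shifted settlement rates, the sketch below fits a difference-in-differences-style logistic regression on a hypothetical charge-level dataset. The file, column names, and controls are all assumptions, not the Article's FOIA data or model.

    ```python
    # Illustrative difference-in-differences-style test on hypothetical charge-level data.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: settled (0/1), treated (claim governed by the statute whose
    # standard changed), post (filed after the change), plus year and circuit controls.
    df = pd.read_csv("eeoc_charges.csv")  # placeholder file, not the Article's FOIA data

    model = smf.logit("settled ~ treated * post + C(year) + C(circuit)", data=df).fit()
    print(model.summary())  # the treated:post coefficient captures the post-adoption shift in settlement
    ```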

    Handbook Transdisciplinary Learning

    What is transdisciplinarity - and what are its methods? How does a living lab work? What is the purpose of citizen science, student-organized teaching and cooperative education? This handbook unpacks key terms and concepts to describe the range of transdisciplinary learning in the context of academic education. Transdisciplinary learning turns out to be a comprehensive innovation process in response to the major global challenges such as climate change, urbanization or migration. A reference work for students, lecturers, scientists, and anyone wanting to understand the profound changes in higher education

    Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings

    The coronavirus disease (COVID-19) pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and large volumes of data are generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped potential of acquired data. This includes the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, triggered by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulation hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which include the following questions: how can well-curated and standardized data be ensured, how do algorithms from other domains perform on multi-institutional medical datasets, and how can more robust and generalizable models be trained? The work also asks how results should be interpreted and whether correlations exist between the performance of the models and the characteristics of the underlying data. Therefore, besides presenting a technical solution for manual data annotation and tagging of medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset show that models trained by federated learning can achieve performance similar to training on pooled data. Furthermore, natural language processing algorithms for semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of the medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance is observed to depend on characteristics of the data such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, a performance gap remains between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, and to continue improving algorithms so that they perform well on medical, possibly domain-shifted, data from multiple clinical centers
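
    The thesis's federated learning system is not described in detail in this abstract; the toy sketch below only illustrates the federated-averaging idea it relies on: each institution trains locally on its own data and only model weights, never patient data, are aggregated centrally. The linear model, synthetic data, and size-weighted aggregation are illustrative assumptions, not the prostate-MRI segmentation setup.

    ```python
    # Toy federated-averaging round over simulated institutions (not the thesis implementation).
    import numpy as np


    def local_update(weights, data, labels, lr=0.1):
        """One epoch of local gradient descent for a linear model at a single institution."""
        preds = data @ weights
        grad = data.T @ (preds - labels) / len(labels)
        return weights - lr * grad


    def federated_round(global_weights, institutions):
        """Each site trains locally; only the resulting weights are sent back and averaged."""
        local_weights = [local_update(global_weights.copy(), X, y) for X, y in institutions]
        sizes = np.array([len(y) for _, y in institutions], dtype=float)
        return np.average(local_weights, axis=0, weights=sizes)  # FedAvg: size-weighted mean


    rng = np.random.default_rng(0)
    institutions = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

    w = np.zeros(3)
    for _ in range(20):
        w = federated_round(w, institutions)
    print("aggregated model weights:", w)
    ```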

    Aerial Drone-based System for Wildfire Monitoring and Suppression

    Wildfire, also known as forest fire or bushfire, is an uncontrolled fire crossing an area of combustible vegetation and has become an inherent natural feature of the landscape in many regions of the world. From local to global scales, wildfire has caused substantial social, economic and environmental consequences. Given the hazardous nature of wildfire, developing automated and safe means to monitor and fight it is of special interest. Unmanned aerial vehicles (UAVs), equipped with appropriate sensors and fire retardants, can remotely monitor and fight areas undergoing wildfires, thus helping fire brigades mitigate their impact. This thesis is dedicated to using UAVs to provide automated surveillance, tracking and fire suppression services during an active wildfire event. To collect the latest information about a region prone to wildfires, we present a strategy to deploy the estimated minimum number of UAVs over a target space with nonuniform importance, such that they persistently monitor the target space to provide complete area coverage while keeping a desired frequency of visits to areas of interest within a predefined time period. To handle occlusions on partial segments of the sensed wildfire boundary, we process both contour and flame-surface features of wildfires with a proposed numerical algorithm that quickly estimates the occluded wildfire boundary. To provide real-time situational awareness of the propagating wildfire boundary, depending on whether prior knowledge of the whole wildfire boundary is available, we use the principle of vector fields to design a model-based guidance law and a model-free guidance law. The former is derived from a radial-basis-function approximation of the wildfire boundary, while the latter is based on the distance between the UAV and the sensed wildfire boundary. Both vector-field-based guidance laws drive the UAV to converge to and patrol along the dynamic wildfire boundary. To effectively mitigate the impacts of wildfires, we analyze the advancement-based activeness of the wildfire boundary with a signal-prominence-based algorithm and design a preferential firefighting strategy that guides the UAV to suppress fires along the highly active segments of the wildfire boundary
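
    As a rough illustration of the distance-based (model-free) vector-field idea, the sketch below steers a point-mass UAV toward, and then along, a circular stand-in for the fire front. The real guidance law in the thesis operates on the sensed, dynamic wildfire boundary, so the circle, gain, and point-mass kinematics here are simplifying assumptions.

    ```python
    # Toy distance-based vector-field guidance toward and along a circular fire front.
    import numpy as np


    def guidance_velocity(pos, center, radius, speed=1.0, k=1.0):
        """Blend a radial correction (toward the boundary) with a tangential term (patrol along it)."""
        rel = pos - center
        dist = np.linalg.norm(rel)
        radial = rel / dist                         # unit vector pointing away from the fire centre
        tangent = np.array([-radial[1], radial[0]]) # 90-degree rotation: circulate along the boundary
        error = dist - radius                       # signed distance to the boundary
        v = tangent - k * error * radial            # converge to the boundary, then patrol it
        return speed * v / np.linalg.norm(v)


    # Simple forward-Euler rollout of the UAV position.
    pos = np.array([3.0, 0.0])
    center, radius, dt = np.zeros(2), 1.0, 0.05
    for _ in range(400):
        pos = pos + dt * guidance_velocity(pos, center, radius)
    print("final distance to boundary:", abs(np.linalg.norm(pos - center) - radius))
    ```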

    Music Theory and Interdisciplinarity: 8th Congress of the Gesellschaft für Musiktheorie, Graz 2008

    The 8th congress of the Gesellschaft für Musiktheorie (GMTH) took place in October 2008 at the University of Music and Performing Arts Graz (KUG) on the topic »Music Theory and Interdisciplinarity«. The collected contributions characterize music theory as a multi-faceted scholarly discipline at the intersection of theory/practice, art/science and history/system. The six chapters explore commonalities with music history, music aesthetics, musical performance, compositional practice in twentieth- and twenty-first-century music, ethnomusicology and systematic musicology. A total of 45 essays (28 in German, 17 in English) and the documentation of a panel discussion form a vital discourse informed by contemporary issues of research in a broad number of fields, providing a unique overview of music theory today. A comprehensive English summary appears at the beginning of each contribution