
    National Culture and Entertainment Center: Iconographic Architecture

    The building I design will be an icon, a landmark to publicize my hometown, the now-thriving Ho Chi Minh City (formerly Saigon), and to celebrate the friendly Vietnamese people and the country of Vietnam in general. The reality of Vietnam and Vietnamese life today is quite different from its image in the past. This new image of Vietnamese culture will be publicized through a series of projections in spaces, on surfaces, and in transformations into form, or through more conventional displays of photographs from different periods of time.

    Comparing Java Programs: Syntactic and Contextual Semantic Differences

    This thesis lays the foundation for a tool that compares Java programs, or different versions of a program. The tool captures syntactic differences as well as contextual semantic differences. Syntactic differences are "ordinary" changes in the code. The tool works much like the Unix tool diff, but it is considerably smarter, because it exploits the fact that programs are structured differently from ordinary text. Since diff is built to compare plain text, it tends to give imprecise or overly verbose results on source code. The tool described in this thesis can identify contextual semantic differences because it knows the contexts of methods: whether a method is declared directly in the class, inherited from an implemented interface, or overrides a method of the class's parent. The approach taken is to transform the programs into abstract syntax trees. The transformation from source code to abstract syntax trees is done with the help of Strafunski, a software bundle that supports generic programming. The tool itself is implemented in Haskell, a functional programming language. Comparing abstract syntax trees can be broken down into the problem of finding the largest common subtree of two abstract syntax trees and, further, the problem of finding the longest common subsequence of two sequences. This thesis describes and presents new algorithms for these problems, and it also presents working Haskell code for the implementation of the tool.
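The tree comparison described above bottoms out in the longest-common-subsequence subproblem. A minimal sketch of that classic dynamic program (rendered in Python rather than the thesis's Haskell, and independent of the thesis's own new algorithms):

```python
def lcs_length(xs, ys):
    """Length of the longest common subsequence of two sequences."""
    # dp[i][j] = LCS length of xs[:i] and ys[:j]
    dp = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i, x in enumerate(xs, 1):
        for j, y in enumerate(ys, 1):
            if x == y:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(xs)][len(ys)]
```

In the tree setting, the same recurrence is applied to the child lists of matched nodes, which is how the largest-common-subtree problem reduces to LCS.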

    Comparing the effectiveness of online and onsite learning in English proficiency classes: Learners’ perspectives

    Online education has gained significant popularity thanks to new technology and, more importantly, the growing digitalization of the economy. Despite prominent advantages such as accessibility, affordability, and flexibility, the effectiveness of online education is still constantly debated and needs extensive investigation in different research contexts. This study aimed to evaluate the effectiveness of online learning in comparison to traditional learning in the context of English language teaching. This descriptive study was undertaken with learners of English as a foreign language (EFL) in English proficiency preparation classes, employing an online questionnaire together with final scores on proficiency tests. The results revealed that the participants had relatively positive perceptions of online learning in all four aspects: course content, teachers, learning environment, and course support. The significant finding was that, when comparing the final results of the VSTEP exams, the online learners generally performed better than the learners in traditional classrooms, though the difference was not substantial. Online education in the new normal will continue to grow, and the effectiveness of this learning mode certainly needs further investigation from different perspectives.

    Learning to Estimate Critical Gait Parameters from Single-View RGB Videos with Transformer-Based Attention Network

    Musculoskeletal diseases and cognitive impairments in patients lead to difficulties in movement as well as negative effects on their psychological health. Clinical gait analysis, a vital tool for early diagnosis and treatment, traditionally relies on expensive optical motion-capture systems. Recent advances in computer vision and deep learning have opened the door to more accessible and cost-effective alternatives. This paper introduces a novel spatio-temporal Transformer network to estimate critical gait parameters from RGB videos captured by a single-view camera. Empirical evaluations on a public dataset of cerebral palsy patients indicate that the proposed framework surpasses current state-of-the-art approaches and shows significant improvements in predicting general gait parameters (including Walking Speed, Gait Deviation Index - GDI, and Knee Flexion Angle at Maximum Extension), while utilizing fewer parameters and alleviating the need for manual feature extraction. Comment: Accepted at ISBI 2024 (21st IEEE International Symposium on Biomedical Imaging).
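The core operation of such a spatio-temporal Transformer is attention over a sequence of per-frame features. A toy single-head self-attention over time, sketched in NumPy (the shapes, names, and random weights here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    # x: (frames, dim) sequence of per-frame pose features; one attention head.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (frames, frames)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over time steps
    return w @ v, w                                   # mixed features, weights

rng = np.random.default_rng(0)
frames, dim = 8, 16
x = rng.normal(size=(frames, dim))
wq, wk, wv = (rng.normal(size=(dim, dim)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

Each output frame is a weighted mixture of all frames, which is what lets the model relate gait events (e.g. heel strikes) that are far apart in time without hand-crafted features.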

    Bandwidth and density for block graphs

    The bandwidth of a graph G is the minimum, over all distinct integer labelings of the vertices, of the maximum difference between labels of adjacent vertices. We provide a polynomial algorithm to produce an optimal bandwidth labeling for graphs in a special class of block graphs (graphs in which every block is a clique), namely those where deleting the vertices of degree one produces a path of cliques. The result is best possible in various ways. Furthermore, for two classes of graphs that are "almost" caterpillars, the bandwidth problem is NP-complete. Comment: 14 pages, 9 included figures. Note: figures did not appear in the original upload; this resubmission corrects that.
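For intuition, the bandwidth definition can be checked by brute force on tiny graphs. This sketch enumerates all labelings and is exponential in the number of vertices; the paper's contribution is precisely a polynomial algorithm for its special class of block graphs:

```python
from itertools import permutations

def bandwidth(n, edges):
    # Minimum over all distinct integer labelings 0..n-1 of the
    # maximum |label(u) - label(v)| over edges (u, v).
    best = n
    for labels in permutations(range(n)):
        width = max(abs(labels[u] - labels[v]) for u, v in edges)
        best = min(best, width)
    return best

path = [(0, 1), (1, 2), (2, 3)]   # P4: labeling vertices in order gives width 1
star = [(0, 1), (0, 2), (0, 3)]   # K_{1,3}: the center cannot be adjacent-labeled to all leaves
```

For the path, bandwidth is 1; for the star, any labeling forces some edge difference of at least 2.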

    OAK4XAI: Model towards Out-Of-Box eXplainable Artificial Intelligence for Digital Agriculture

    Recent machine learning approaches have been effective in Artificial Intelligence (AI) applications and produce robust results with a high level of accuracy. However, most of these techniques do not provide human-understandable explanations to support their results and decisions. They usually act as black boxes, and it is not easy to understand how decisions have been made. Explainable Artificial Intelligence (XAI), which has received much interest recently, tries to provide human-understandable explanations for decision-making and trained AI models. In digital agriculture, for instance, related domains often involve peculiar input features with no obvious link to background knowledge, and applying the data mining process to agricultural data leads to results (knowledge) that are difficult to explain. In this paper, we propose a knowledge map model and an ontology design as an XAI framework (OAK4XAI) to deal with this issue. The framework considers not only the data-analysis part of the process but also the semantic aspects of the domain knowledge, via an ontology and a knowledge map model provided as modules of the framework. Many ongoing XAI studies aim to provide accurate and verbalizable accounts of how given feature values contribute to model decisions. The proposed approach, however, focuses on providing consistent information and definitions for the concepts, algorithms, and values involved in the data mining models. We built an Agriculture Computing Ontology (AgriComO) to explain the knowledge mined in agriculture. AgriComO has a well-designed structure and includes a wide range of concepts and transformations suitable for the agriculture and computing domains. Comment: AI-2022, Forty-second SGAI International Conference on Artificial Intelligence.

    Knowledge Representation in Digital Agriculture: A Step Towards Standardised Model

    In recent years, data science has evolved significantly. Data analysis and mining have become routine in every sector of the economy where datasets are available. Vast data repositories have been collected, curated, stored, and used for extracting knowledge, and this has become commonplace. Subsequently, we extract a large amount of knowledge, either directly from the data or through experts in the given domain. The challenge now is how to exploit this large body of previously acquired knowledge for efficient decision-making. Until recently, much of the knowledge gained through years of research was stored in static knowledge bases or ontologies, while the more diverse and dynamic knowledge acquired from data mining studies was not centrally or consistently managed. In this research, we propose a novel model, called an ontology-based knowledge map, to represent and store the results (knowledge) of data mining in crop farming, and to build, maintain, and enrich the process of knowledge discovery. The proposed model consists of six main sets: concepts, attributes, relations, transformations, instances, and states. The model is dynamic and facilitates access to, updates of, and exploitation of the knowledge at any time. This paper also proposes an architecture for handling this knowledge-based model, covering knowledge modelling, extraction, assessment, publishing, and exploitation. The system has been implemented and used in agriculture for crop management and monitoring, and it has proven effective and promising for extension to other domains.
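The six-set model can be pictured as a simple data structure. The following is a hypothetical Python rendering: the six set names come from the abstract, but the fields, method, and example values are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeMap:
    """Sketch of an ontology-based knowledge map with its six main sets."""
    concepts: set = field(default_factory=set)
    attributes: set = field(default_factory=set)
    relations: set = field(default_factory=set)
    transformations: set = field(default_factory=set)
    instances: set = field(default_factory=set)
    states: set = field(default_factory=set)

    def add_instance(self, concept, instance):
        # Dynamic enrichment: register a newly mined result under a concept.
        self.concepts.add(concept)
        self.instances.add((concept, instance))

km = KnowledgeMap()
km.add_instance("Crop", "winter_wheat_2021")   # hypothetical mined instance
```

Because the sets are mutable, mined knowledge can be added, updated, and queried at any time, which is the "dynamic" property the abstract contrasts with static ontologies.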

    Trade Liberalization and Development in ICT Sector and its impact on household welfare in Viet Nam

    The ICT sector in Viet Nam did not begin to develop until the 1980s. However, over the last decade of rapid growth, it has had a powerful impact on many aspects of life in the country. Although the ICT sector is still at an early stage of development and lags behind many other countries in the region, the government of Viet Nam has made strong commitments to upgrade the nation's ICT capability and has implemented significant trade and investment liberalization reforms in the ICT sector over the last decade.
    Keywords: Trade Liberalization, ICT, Household welfare, Viet Nam

    SMART GEOLOGY FOR FUTURE HANOI: UNDERSTANDING THE ROLE OF GEOLOGY FOR SUSTAINABLE DEVELOPMENT

    Asia is the second most rapidly urbanising region globally; 48% of the Asian population currently resides in urban areas. Disturbance to the natural environment as a result of urbanisation is expected to be significant, since natural resource consumption (e.g. water, energy) and the physical expansion of cities are both outpacing population growth. While cities have been characterised in terms of their economic, social, and environmental situation, the role of geology in city resilience and sustainability is under-appreciated. Using Greater Hanoi, a soft-sediment city lying in the Red River catchment, contrasted with Greater London, a riverine city in the Thames River catchment, we illustrate how an understanding of the urban geological environment, supported by data informatics, sensor technologies, and modelling systems, may be used to underpin urban development and sustainable use of the subsurface. The priority urban challenges identified in Hanoi's master planning, including future transport infrastructure, groundwater management and drainage, subsidence, and shallow geothermal energy utilisation, are used as case studies to highlight the potential benefits of 3D urban geology approaches and digital data workflows. The benefits of geoscience knowledge-exchange networks across government and private-sector partners are also highlighted.

    Qsun: an open-source platform towards practical quantum machine learning applications

    Quantum hardware is currently constrained by noise and limited qubit numbers. A quantum virtual machine, which simulates the operations of a quantum computer on classical computers, is therefore a vital tool for developing and testing quantum algorithms before deploying them on real quantum computers. Various variational quantum algorithms have been proposed and tested on quantum virtual machines to work around the limitations of quantum hardware. Our goal is to further exploit variational quantum algorithms for practical quantum machine learning applications on state-of-the-art quantum computers. This paper first introduces our quantum virtual machine, named Qsun, whose operation is underpinned by quantum state wave-functions. The platform provides native tools supporting variational quantum algorithms. In particular, using the parameter-shift rule, we implement the quantum differentiable programming essential for gradient-based optimization. We then report two tests representative of quantum machine learning: quantum linear regression and quantum neural networks. Comment: 18 pages, 7 figures.
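The parameter-shift rule can be illustrated on a toy one-qubit expectation value, independent of Qsun's actual API (the cosine model and function names below are assumptions for illustration). For gates generated by Pauli operators, evaluating the circuit at two shifted parameter values yields the exact gradient, not a finite-difference approximation:

```python
import numpy as np

def expectation(theta):
    # Toy observable: <Z> after an RY(theta) rotation on |0> is cos(theta).
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Parameter-shift rule: df/dtheta = [f(theta + s) - f(theta - s)] / (2 sin s).
    # With s = pi/2 this is exact for Pauli-generated gates.
    return (f(theta + shift) - f(theta - shift)) / (2 * np.sin(shift))

theta = 0.3
grad = parameter_shift_grad(expectation, theta)   # analytically, -sin(0.3)
```

Because the same circuit is run twice with shifted parameters, the rule works on hardware as well as on a simulator, which is what makes it suitable for gradient-based optimization of variational circuits.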