    Cloud-Based Collaborative 3D Modeling to Train Engineers for the Industry 4.0

    In the present study, Autodesk Fusion 360 software (which includes the A360 environment) is used to train engineering students for the demands of Industry 4.0. Fusion 360 is a tool that unifies product lifecycle management (PLM) applications and 3D-modeling software (PDLM, product design and lifecycle management). The main objective of the research is to examine in depth students' perception of the use of a PDLM application and its dependence on three categorical variables: prior PLM knowledge, individual practices and perception of collaborative engineering. To this end, a collaborative graphic simulation of an engineering project is carried out in the engineering graphics course at the University of La Laguna with 65 undergraduate engineering students. A scale to measure the perception of the use of PDLM is designed, applied and validated. Subsequently, descriptive analyses, graphical contingency analyses and non-parametric analyses of variance are performed. The results indicate that this type of experience is very well received overall and that it helps students understand how professionals work in collaborative environments. It is concluded that industry's demands on future engineers can be met through training programs based on collaborative 3D-modeling environments.
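
    The non-parametric analysis of variance mentioned above could look like the following minimal sketch, assuming a Kruskal-Wallis test of scale scores against one of the categorical variables; the column names and values are hypothetical placeholders, not the study's data.

    ```python
    # Sketch: non-parametric ANOVA (Kruskal-Wallis) of PDLM perception scores
    # grouped by one categorical variable. Data and column names are
    # hypothetical placeholders.
    import pandas as pd
    from scipy.stats import kruskal

    df = pd.DataFrame({
        "plm_previous_knowledge": ["none", "basic", "basic", "none", "advanced", "advanced"],
        "pdlm_perception_score": [3.2, 4.1, 3.8, 2.9, 4.5, 4.4],  # scale means
    })

    groups = [g["pdlm_perception_score"].values
              for _, g in df.groupby("plm_previous_knowledge")]
    h_stat, p_value = kruskal(*groups)
    print(f"H = {h_stat:.3f}, p = {p_value:.3f}")  # small p -> perception differs by group
    ```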

    Toward a Semiotic Framework for Using Technology in Mathematics Education: The Case of Learning 3D Geometry

    This paper proposes and examines a semiotic framework to inform the use of technology in mathematics education. Semiotics asserts that all cognition is irreducibly triadic, of the nature of a sign, fallible, and thoroughly immersed in a continuing process of interpretation (Halton, 1992). Mathematical meaning-making, or meaningful knowledge construction, is a continuing process of interpretation within multiple semiotic resources, including typological, topological, and social-actional resources. Based on this semiotic framework, an application named VRMath has been developed to facilitate the learning of 3D geometry. VRMath utilises innovative virtual reality (VR) technology and integrates many semiotic resources to form a virtual reality learning environment (VRLE) as well as a mathematical microworld (Edwards, 1995) for learning 3D geometry. The semiotic framework and VRMath are both now being evaluated and will be re-examined continuously.
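
    The abstract does not detail VRMath's command set, but a classic microworld mechanism for 3D geometry is a "turtle" whose position and heading are updated by rotation matrices. The sketch below is a hypothetical illustration of that idea, not VRMath's actual interface.

    ```python
    # Sketch: a minimal 3D "turtle" of the kind used in geometry microworlds.
    # Position and heading are updated by rotation matrices. Hypothetical
    # illustration only; not VRMath's actual command set.
    import numpy as np

    def rotation(axis, degrees):
        """Rotation matrix about the x-, y- or z-axis."""
        c, s = np.cos(np.radians(degrees)), np.sin(np.radians(degrees))
        return {"x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
                "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
                "z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])}[axis]

    position = np.zeros(3)
    heading = np.array([0.0, 0.0, 1.0])      # start pointing along the z-axis

    heading = rotation("x", 90) @ heading    # turn: pitch by 90 degrees
    position = position + 5 * heading        # move forward 5 units
    print(position.round(2))                 # [ 0. -5.  0.]
    ```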

    The LAB@FUTURE Project - Moving Towards the Future of E-Learning

    This paper presents Lab@Future, an advanced e-learning platform that uses novel Information and Communication Technologies to support and expand laboratory teaching practices. For this purpose, Lab@Future uses real and computer-generated objects that are interfaced using mechatronic systems, augmented reality, mobile technologies and 3D multi-user environments. The main aim is to develop and demonstrate technological support for practical experiments in the following subjects: Fluid Dynamics (a science subject in Germany), Geometry (a mathematics subject in Austria), and History and Environmental Awareness (arts and humanities subjects in Greece and Slovenia). In order to pedagogically enhance the design and functional aspects of this e-learning technology, we are investigating the dialogical operationalisation of learning theories so as to leverage our understanding of teaching and learning practices in the targeted context of deployment.

    Exploring perceptions and attitudes towards teaching and learning manual technical drawing in a digital age

    This paper examines the place of manual technical drawing in the 21st century by discussing the perceived value and relevance of teaching school students how to draw using traditional instruments in a world of computer-aided drafting (CAD). Views were obtained through an e-survey, questionnaires and structured interviews. The sample groups represent professional CAD users (e.g. engineers, architects); university lecturers; Technology Education teachers and student teachers; and school students taking Scottish Qualifications Authority (SQA) Graphic Communication courses. An analysis of these personal views and attitudes indicates some values common to the various groups canvassed regarding what instruction in traditional manual technical drafting contributes to learning. Themes emerge such as problem solving, visualisation, accuracy, co-ordination, use of standard conventions, personal discipline and artistry. In contrast to the assumptions of Prensky's thesis (2001a, b) about digital natives, the study reported in this paper indicates that school students appear to appreciate the experience of traditional drafting. In conclusion, the paper illustrates the perceived value of such learning in terms of transferable skills, personal achievement and enjoyment.

    Semantic data mining and linked data for a recommender system in the AEC industry

    Even though monitored building data can provide design teams with valuable performance insights and enhance decision-making, it is rarely reused in an effective feedback loop from operation to design. Data mining allows users to obtain such insights from the large datasets generated throughout the building life cycle. Furthermore, semantic web technologies make it possible to formally represent the built environment and to retrieve knowledge in response to domain-specific requirements. Both approaches have independently established themselves as powerful aids in decision-making. Combining them can enrich data mining processes with domain knowledge and facilitate knowledge discovery, representation and reuse. In this article, we look into the available data mining techniques and investigate to what extent they can be fused with semantic web technologies to provide recommendations to the end user in performance-oriented design. We demonstrate an initial implementation of a linked-data-based system for the generation of recommendations.
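
    As a rough illustration of fusing the two approaches, the following sketch queries a linked building-data graph with SPARQL (via rdflib) and feeds the result into a simple clustering step. The ontology terms, file name and recommendation step are assumptions made for the example, not the system demonstrated in the article.

    ```python
    # Sketch: combine semantic web technologies (SPARQL over RDF) with a
    # data mining step (clustering). Ontology namespace, dataset file and
    # property names are hypothetical.
    import numpy as np
    from rdflib import Graph
    from sklearn.cluster import KMeans

    g = Graph()
    g.parse("building_monitoring.ttl", format="turtle")  # hypothetical dataset

    # Retrieve zone-level energy readings with a domain-specific query.
    rows = g.query("""
        PREFIX ex: <http://example.org/aec#>
        SELECT ?zone ?energy WHERE { ?zone ex:hasEnergyUse ?energy . }
    """)
    zones, energy = zip(*[(str(z), float(e)) for z, e in rows])

    # Mine the retrieved observations: group zones by consumption level.
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(
        np.array(energy).reshape(-1, 1))
    for zone, label in zip(zones, labels):
        print(zone, "-> cluster", label)  # clusters could seed recommendations
    ```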

    Using Augmented Reality as a Medium to Assist Teaching in Higher Education

    In this paper we describe the use of a high-level augmented reality (AR) interface for the construction of collaborative educational applications that can be used in practice to enhance current teaching methods. A combination of multimedia information, including spatial three-dimensional models, images, textual information, video, animations and sound, can be superimposed in a student-friendly manner onto the learning environment. In several case studies, different learning scenarios have been carefully designed based on human-computer interaction principles so that meaningful virtual information is presented in an interactive and compelling way. Collaboration between the participants is achieved through the use of a tangible AR interface that uses marker cards, as well as an immersive AR environment based on software user interfaces (UIs) and hardware devices. The interactive AR interface has been piloted in the classroom at two UK universities, in departments of Informatics and Information Science.
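
    The abstract does not name the tracking library behind the marker cards; one common way to implement such tangible input is fiducial marker detection with OpenCV's ArUco module, as in this minimal sketch (assuming OpenCV 4.7 or later).

    ```python
    # Sketch: detect fiducial marker cards in a webcam frame with OpenCV's
    # ArUco module (API of OpenCV >= 4.7). This is one common way to drive a
    # tangible AR interface; the paper's own toolkit is not specified.
    import cv2

    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    if ok:
        corners, ids, _rejected = detector.detectMarkers(frame)
        if ids is not None:
            # Each detected marker id could be mapped to a virtual object
            # (3D model, video, animation) overlaid on the student's view.
            print("visible marker cards:", ids.ravel().tolist())
    cap.release()
    ```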

    Simple identification tools in FishBase

    Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters such as fin ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict the set of possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.
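
    At its core, a simple identification tool of this kind filters a species table by observed characters. The sketch below illustrates the idea with country and a fin ray count as criteria; the records and ranges are made up for the example, not actual FishBase data.

    ```python
    # Sketch: character-based species filtering in the style of a simple
    # identification tool. Records and meristic ranges are illustrative.
    species = [
        {"name": "Species A", "country": "Philippines", "dorsal_rays": (10, 12)},
        {"name": "Species B", "country": "Philippines", "dorsal_rays": (14, 16)},
        {"name": "Species C", "country": "Brazil",      "dorsal_rays": (10, 11)},
    ]

    def identify(country, dorsal_ray_count):
        """Return species found in `country` whose dorsal fin ray range
        covers the observed count."""
        return [s["name"] for s in species
                if s["country"] == country
                and s["dorsal_rays"][0] <= dorsal_ray_count <= s["dorsal_rays"][1]]

    print(identify("Philippines", 11))  # ['Species A']
    ```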

    Mining Unclassified Traffic Using Automatic Clustering Techniques

    In this paper we present a fully unsupervised algorithm to identify classes of traffic inside an aggregate. The algorithm builds on the K-means clustering algorithm, augmented with a mechanism to automatically determine the number of traffic clusters. The signatures used for clustering are statistical representations of the application-layer protocols. The proposed technique is extensively tested on UDP traffic traces collected from operational networks. Performance tests show that it can cluster the traffic into a few tens of pure clusters, achieving an accuracy above 95%. Results are promising and suggest that the proposed approach could effectively be used for automatic traffic monitoring, e.g., to identify the birth of new applications and protocols, or the presence of anomalous or unexpected traffic.
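
    The abstract does not spell out the paper's rule for determining the number of clusters automatically; a common stand-in is to sweep k and keep the partition with the best silhouette score, as in this sketch over synthetic flow signatures.

    ```python
    # Sketch: K-means over statistical traffic signatures, with the number
    # of clusters chosen by a silhouette-score sweep (a common stand-in for
    # the paper's unspecified mechanism). Signatures here are synthetic.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # e.g., per-flow features: mean packet size, inter-arrival stats, ...
    signatures = np.vstack([rng.normal(loc, 0.5, size=(100, 4))
                            for loc in (0.0, 3.0, 6.0)])

    best_k, best_score = 2, -1.0
    for k in range(2, 11):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(signatures)
        score = silhouette_score(signatures, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(f"chosen k = {best_k} (silhouette = {best_score:.2f})")  # expect k = 3
    ```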

    Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis

    Database theory and database practice are typically the domain of computer scientists, who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. Using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
    Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012).
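
    A classical instance of the phenomenon discussed here is early stopping: truncating an iterative least-squares solver shrinks the solution much as explicit ridge regularization would. The sketch below illustrates this on synthetic data; it is a textbook example, not one of the paper's case studies.

    ```python
    # Sketch: approximate computation as implicit regularization. Stopping
    # gradient descent on least squares early damps directions with small
    # singular values, much like ridge shrinkage. Illustrative example only.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(200, 50))
    x_true = rng.normal(size=50)
    b = A @ x_true + rng.normal(scale=5.0, size=200)  # noisy observations

    x_exact = np.linalg.lstsq(A, b, rcond=None)[0]    # exact least squares

    x = np.zeros(50)
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe step size
    for _ in range(20):                               # truncated, approximate solve
        x -= step * A.T @ (A @ x - b)

    # The early-stopped iterate is shrunk toward zero, like a ridge solution.
    print("||x_exact|| =", round(float(np.linalg.norm(x_exact)), 2))
    print("||x_20||    =", round(float(np.linalg.norm(x)), 2))
    ```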