15,967 research outputs found
How are cloud-based platforms changing cultural services: Towards a new service integration model
Cloud-based cultural platforms, a new strategy for public digital cultural services in China, are developing rapidly and becoming increasingly important. Nevertheless, a considerable gap remains between their current operation and the expectations stated in various policy documents, especially in resource integration across cultural institutions, online and offline interaction, and smart services. This research builds a new theoretical service integration model for the application of cloud platforms in cultural services, and explores interactions between cloud-based cultural platforms, physical cultural venues and smart individual spaces.
Angels in the architecture - and devils in the detail: how the learning space impacts on teaching and learning
An innovative classroom design pioneered in the US, which aims to facilitate greater student engagement, has been piloted in a UK university for the first time. This case study reflects on some of the advantages and challenges of this technology-rich learning space and considers its impact on curriculum design in a module that aims to develop academic, research and digital skills in first-year students on an undergraduate Health and Social Care course.
Complete LibTech 2013 Print Program
PDF of the complete print program from the 2013 Library Technology Conference.
Using Cloudworks to Support OER Activities
This report forms the third and final output of the Pearls in the Clouds project, funded by the Higher Education Academy. It focuses on evaluation of the use of a social networking site, Cloudworks, to support evidence-based practice.
The aim of this project (Pearls in the Clouds) has been to evaluate the ways in which web 2.0 tools like Cloudworks can support evidence-informed practices in relation to learning and teaching. We have reviewed evidence from empirically grounded studies on the uses of web 2.0 in higher education and highlighted the gap between using web 2.0 to support learning and teaching, and using it to support learning about learning and teaching in an evidence-informed way (Conole and Alevizou, 2010). We have reported on findings from a case study focusing on the use of Cloudworks by a community of practice - educational technologists - reflecting upon and negotiating their role in enhancing teaching and learning in higher education (Galley et al., 2010). The object of this study is to explore and evaluate the use of the site by individuals and communities involved in the production of, and research on, the development, delivery and use of Open Educational Resources (OER).
Revealing the Vicious Circle of Disengaged User Acceptance: A SaaS Provider's Perspective
User acceptance tests (UATs) are an integral part of many different software engineering methodologies. In this paper, we examine the influence of UATs on the relationship between users and Software-as-a-Service (SaaS) applications, which are continuously delivered rather than rolled out during a one-off signoff process. Based on an exploratory qualitative field study at a multinational SaaS provider in Denmark, we show that UATs often address the wrong problem, in that positive user acceptance may actually indicate a negative user experience. Hence, SaaS providers should be careful not to rest on what we term disengaged user acceptance. Instead, we outline an approach that purposefully queries users for ambivalent emotions that evoke constructive criticism, in order to facilitate a discourse that favors the continuous innovation of a SaaS system. We discuss theoretical and practical implications of our approach for the study of user engagement in testing SaaS applications.
OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System
Automated machine learning (AutoML) seeks to build ML models with minimal human effort. While considerable research has been conducted in the area of AutoML in general, aiming to take humans out of the loop when building artificial intelligence (AI) applications, scant literature has focused on how AutoML works well in open-environment scenarios such as the process of training and updating large models, industrial supply chains or the industrial metaverse, where people often face open-loop problems during the search process: they must continuously collect data, update data and models, satisfy the requirements of the development and deployment environment, support massive devices, modify evaluation metrics, etc. Addressing the open-environment issue with pure data-driven approaches requires considerable data, computing resources, and effort from dedicated data engineers, making current AutoML systems and platforms inefficient and computationally intractable. Human-computer interaction is a practical and feasible way to tackle the problem of open-environment AI. In this paper, we introduce OmniForce, a human-centered AutoML (HAML) system that yields both human-assisted ML and ML-assisted human techniques, to put an AutoML system into practice and build adaptive AI in open-environment scenarios. Specifically, we present OmniForce in terms of ML version management; pipeline-driven development and deployment collaborations; a flexible search strategy framework; and widely provisioned and crowdsourced application algorithms, including large models. Furthermore, the (large) models constructed by OmniForce can be automatically turned into remote services in a few minutes; this process is dubbed model as a service (MaaS). Experimental results obtained in multiple search spaces and real-world use cases demonstrate the efficacy and efficiency of OmniForce.
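The human-in-the-loop search process this abstract describes can be illustrated with a minimal generic sketch. Everything below is an illustrative assumption, not OmniForce's actual API: the search space, the toy objective, and the `human_feedback` hook stand in for a real system where a person can inspect candidates and steer or stop the search.

```python
import random

def evaluate(config):
    """Stand-in objective: score a candidate configuration (toy function)."""
    return -((config["lr"] - 0.01) ** 2) - 0.1 * config["depth"]

def human_feedback(best_config, score):
    """Stand-in for human review: accept the best candidate so far,
    or let the search continue. A real HAML system would surface the
    candidate to a person here."""
    return score > -0.5  # accept once the score clears a threshold

def search(space, rounds=10, seed=0):
    """Random search over the space, with a human-in-the-loop stop hook."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        config = {k: rng.choice(v) for k, v in space.items()}
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
        if human_feedback(best, best_score):  # human can end the loop early
            break
    return best, best_score

space = {"lr": [0.001, 0.01, 0.1], "depth": [2, 4, 8]}
best, score = search(space)
print(best, score)
```

In an open-environment setting, the same loop would additionally let the human edit `space`, swap `evaluate` for an updated metric, or restart from a new data snapshot between rounds.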
DualStream: Spatially Sharing Selves and Surroundings using Mobile Devices and Augmented Reality
In-person human interaction relies on our spatial perception of each other and our surroundings. Current remote communication tools partially address each of these aspects. Video calls convey real user representations but without spatial interactions. Augmented and Virtual Reality (AR/VR) experiences are immersive and spatial but often use virtual environments and characters instead of real-life representations. Bridging these gaps, we introduce DualStream, a system for synchronous mobile AR remote communication that captures, streams, and displays spatial representations of users and their surroundings. DualStream supports transitions between user and environment representations with different levels of visuospatial fidelity, as well as the creation of persistent shared spaces using environment snapshots. We demonstrate how DualStream can enable spatial communication in real-world contexts, and support the creation of blended spaces for collaboration. A formative evaluation of DualStream revealed that users valued the ability to interact spatially and move between representations, and could see DualStream fitting into their own remote communication practices in the near future. Drawing from these findings, we discuss new opportunities for designing more widely accessible spatial communication tools, centered around the mobile phone.
Comment: 10 pages, 4 figures, 1 table; to appear in the proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2023.