The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when it is properly developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, enabling users,
scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse
ecosystem and identify opportunities for contribution.
One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era
OpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is
demonstrated to be one small step for generative AI (GAI), but one giant leap
for artificial general intelligence (AGI). Since its official release in
November 2022, ChatGPT has quickly attracted numerous users with extensive
media coverage. Such unprecedented attention has also motivated numerous
researchers to investigate ChatGPT from various aspects. According to Google
Scholar, there are more than 500 articles with ChatGPT in their titles or
mentioning it in their abstracts. Considering this, a review is urgently
needed, and our work fills this gap. Overall, this work is the first to survey
ChatGPT with a comprehensive review of its underlying technology, applications,
and challenges. Moreover, we present an outlook on how ChatGPT might evolve to
realize general-purpose AIGC (a.k.a. AI-generated content), which will be a
significant milestone for the development of AGI.
Comment: A Survey on ChatGPT and GPT-4, 29 pages. Feedback is appreciated ([email protected])
A Proposed Meta-Reality Immersive Development Pipeline: Generative AI Models and Extended Reality (XR) Content for the Metaverse
The realization of an interoperable and scalable virtual platform, currently known as the “metaverse,” is inevitable, but many technological challenges need to be overcome first. With the metaverse still in a nascent phase, research currently indicates that building a new 3D social environment capable of interoperable avatars and digital transactions will represent most of the initial investment in time and capital. The return on investment, however, is worth the financial risk for firms like Meta, Google, and Apple: the virtual space of the metaverse is projected to be worth $84.09 billion by the end of 2028. But the creation of an entire alternate virtual universe of 3D avatars, objects, and otherworldly cityscapes calls for a new development pipeline and workflow. Existing 3D modeling and digital twin processes, already well established in industry and gaming, will be ported to support the need to architect and furnish this new digital world. The current development pipeline, however, is cumbersome, expensive, and limited in output capacity. This paper proposes a new and innovative immersive development pipeline leveraging recent advances in artificial intelligence (AI) for 3D model creation and optimization. The previous reliance on 3D modeling software to create assets and then import them into a game engine can be replaced with nearly instantaneous content creation with AI. While AI art generators like DALL-E 2 and DeepAI have been used for 2D asset creation, when combined with game engine technology such as Unreal Engine 5 and virtualized geometry systems like Nanite, a new process for creating nearly unlimited content for immersive reality is possible. New processes and workflows, such as those proposed here, will revolutionize content creation and pave the way for Web 3.0, the metaverse, and a truly 3D social environment.
A qualitative study about first year students’ experiences of transitioning to higher education and available academic support resources
Successfully transitioning students to higher education is a complex problem that challenges institutions internationally. Unsuccessful transitions have wide-ranging implications that include both social and financial impacts for students and the universities. There appears to be a paucity of literature representing student perspectives on their transition experiences. This research study aimed to do two things: first, to better understand the transition experience and use of academic support services from the student perspective, and second, to provide strategies for facilitating a more effective transition experience based on student discussions.
This research explores the experiences of primarily non-traditional students at one institution in Australia. Data collection involved two phases using a yarning circle approach. The first involved participants in small unstructured yarning circles where they were given the opportunity to speak freely about their transition experience and their use of academic support services. This was then followed by a larger yarning circle that was semi-structured to explore some of the themes from the small yarning circles more fully. The yarning circle data was analysed using Braun and Clarke’s (2006) six-steps of thematic analysis.
The analysis indicated that participants felt that the available academic support services did not meet their needs. It also provided insight into how the students approach higher education and what they are seeking from their institution by way of support. One major finding that has the potential to impact transition programs around the world is that older non-traditional students appear to approach higher education as they would a new job. This shifts the lens away from the traditional transition program of social integration to one that uses workplace induction strategies as a form of integration. The recommendations from this study also include recognising and accepting the emotions associated with transitioning to higher education, reworking the transition strategies for non-traditional students, and facilitating opportunities for engagement as opposed to providing them directly.
Antecedents of business intelligence system use
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Organisational reliance on information has become vital for organisational competitiveness. With increasing data volumes, Business Intelligence (BI) becomes a cornerstone of the decision-support system. However, employee resistance to using Business Intelligence Systems (BIS) is evident. This creates a problem for organisations in realising the benefits of BIS. It is thus important to study the enablers of sustained use of BIS amongst employees.
This thesis identifies existing theories that can be used to study BI system use. It integrates and extends technology use theories through a framework focusing on Business Intelligence System Use (BISU). Empirical research is then conducted in Kuwait’s telecom and banking industries through a close-ended, self-administered questionnaire using a five-point Likert scale. Responses were received from 211 BI users. The data was analysed using SmartPLS to assess convergent and discriminant validity and reliability. Partial least squares structural equation modelling (PLS-SEM) was used to study the direct and indirect relationships between constructs and test the hypotheses. In addition to SmartPLS, SPSS was used for descriptive analysis.
The results indicated that UTAUT factors consisting of performance expectancy, effort expectancy and social influence positively impact BI system use. Voluntariness of use was found to positively moderate the relationship between social influence and BI system use. Furthermore, BI system quality positively impacts both performance expectancy and effort expectancy. The BI user’s self-efficacy also positively impacts effort expectancy. In addition, social influence was found to be positively influenced by organisational factors, namely top management support and information culture.
The findings of this research contribute to the literature by determining and quantifying the factors that influence BISU through the lens of employee perspectives. This thesis also explains how employees’ object-based beliefs about BI affect their behavioural beliefs, which in turn impact BISU. Limitations of this research include the omission of UTAUT’s facilitating conditions and the limited variance of respondent demographics.
Um modelo para suporte automatizado ao reconhecimento, extração, personalização e reconstrução de gráficos estáticos [A model for automated support of the recognition, extraction, customization, and reconstruction of static charts]
Data charts are widely used in our daily lives, being present in regular media
such as newspapers, magazines, web pages, books, and many others. A well-constructed
data chart leads to an intuitive understanding of its underlying data;
conversely, when data charts have poor design choices, a redesign
of these representations might be needed. However, in most cases, these
charts are shown as a static image, which means that the original data are not
usually available. Therefore, automatic methods could be applied to extract the
underlying data from the chart images to allow these changes. The task of
recognizing charts and extracting data from them is complex, largely due to the
variety of chart types and their visual characteristics.
Computer Vision techniques for image classification and object detection are
widely used for the problem of recognizing charts, but mostly on images without
any disturbance. Other features of real-world images that can make this task
difficult, such as photo distortions, noise, and misalignment, are rarely
addressed in the literature. Two computer vision techniques that can assist this
task and have been little explored in this context are perspective detection and
correction. These methods transform a distorted, noisy chart into a clean
chart, with its type ready for data extraction or other uses. The task of
reconstructing data is straightforward: as long as the data are available, the
visualization can be reconstructed. The scenario of reconstructing it in the
same context, however, is complex.
Using a Visualization Grammar for this scenario is a key component, as these
grammars usually have extensions for interaction, chart layers, and multiple
views without requiring extra development effort.
This work presents a model for automated support for custom recognition and
reconstruction of charts in images. The model automatically performs the
process steps, such as reverse engineering, turning a static chart back into its
data table for later reconstruction, while allowing the user to make modifications
in case of uncertainty. This work also features a model-based architecture
along with prototypes for various use cases. Validation is performed step by
step, with methods inspired by the literature. Three use
cases provide proof of concept and validation of the model.
The first use case features chart recognition methods focused on
real-world documents; the second focuses on the vocalization of
charts, using a visualization grammar to reconstruct a chart in audio format;
and the third presents an Augmented Reality application that
recognizes and reconstructs charts in the same context (a piece of paper),
overlaying the new chart and interaction widgets. The results showed that, with
slight changes, chart recognition and reconstruction methods are now ready for
real-world charts when taking time, accuracy, and precision into consideration.
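At its core, the reverse-engineering step this abstract describes reduces to mapping pixel coordinates back to data values once the axes have been recognized. A minimal sketch of that mapping, assuming a bar chart whose y-axis endpoints and maximum value have already been detected (all function names and pixel values here are hypothetical illustrations, not the thesis's implementation):

```python
def pixel_to_value(y_px, axis_zero_px, axis_max_px, axis_max_value):
    """Map a pixel row to a data value by linear interpolation.

    Image y-coordinates grow downward, so axis_zero_px (the baseline)
    is numerically larger than axis_max_px (the top of the axis).
    """
    span_px = axis_zero_px - axis_max_px
    return (axis_zero_px - y_px) * axis_max_value / span_px

# Hypothetical detections: baseline at row 400, axis top (value 30) at row 100,
# and three bar tops located by the recognition step.
bar_tops_px = [250, 150, 380]
values = [pixel_to_value(y, 400, 100, 30) for y in bar_tops_px]
print(values)  # [15.0, 25.0, 2.0]
```

A real pipeline would first apply perspective correction and locate the axes and marks, but the final extraction step is essentially this calibration from pixels to data.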
A Comparative Study on Students’ Learning Expectations of Entrepreneurship Education in the UK and China
Entrepreneurship education has become a critical subject in academic research and educational policy design, occupying a central role in contemporary education globally. However, a review of the literature indicates that research on entrepreneurship
education is still at a relatively early stage. To date, little is known about how entrepreneurship education learning is affected by its environmental context. Therefore, combining the institutional context and focusing on students’ learning expectations as
a novel perspective, the main aim of the thesis is to address this knowledge gap by developing an original conceptual framework to advance understanding of the dynamic learning process of entrepreneurship education through the lens of self-determination theory.
The author adopted an epistemological positivism philosophy and a deductive approach. This study gathered 247 valid questionnaires from the UK (84) and China (163). It asked students to recall their learning expectations before attending their entrepreneurship courses and to assess their perceptions of learning outcomes after taking them. It was found that entrepreneurship education policy is an antecedent that influences students' learning expectations, reflected in differences in student autonomy. British students engaged in active learning under a voluntary education policy have higher autonomy than Chinese students in passive learning under a compulsory education policy, and thus higher learning
expectations, leading to higher satisfaction. The positive relationship between autonomy and learning expectations is established, which adds a new dimension to self-determination theory. Furthermore, it is also revealed that the change in students’ entrepreneurial intentions before and after their entrepreneurship courses is explained by understanding of the process of a business start-up (positive), hands-on business start-up opportunities (positive), students’ actual input (positive), and tutors’ academic qualifications (negative).
The thesis makes contributions to both theory and practice. The findings have far-reaching implications for different parties, including policymakers, educators, practitioners, and researchers. Understanding and shaping students' learning expectations is a critical first step in optimising entrepreneurship education teaching and learning. On the one hand, understanding students' learning expectations of entrepreneurship and entrepreneurship education can help the government with educational interventions and policy reform, as well as improving the quality and delivery of university-based entrepreneurship education. On the other hand, entrepreneurship education can assist students in establishing correct and realistic learning expectations and entrepreneurial conceptions, which will benefit their future entrepreneurial activities and/or employment. An important implication is that this study connects multiple stakeholders by bridging the national-level institutional context, organisational-level university entrepreneurship education, and individual-level entrepreneurial learning to promote student autonomy based on an understanding of students' learning expectations. This can help develop graduates' capacity for autonomous learning and autonomous entrepreneurial behaviour.
The results of this study help to remind students that it is they, the learners, whose expectations and input can make the difference between success and failure in their studies. This would apply not only to entrepreneurship education but also to
other fields of study. One key message from this study is that education can be encouraged and supported but cannot be “forced”. Mandatory entrepreneurship education is not a quick fix for the lack of university students’ innovation and
entrepreneurship. More resources must be invested in enhancing the enterprise culture, thus making entrepreneurship education desirable for students.
A Case Study Examining Japanese University Students' Digital Literacy and Perceptions of Digital Tools for Academic English learning
Current Japanese youth are constantly connected to the Internet and using digital devices, but predominantly for social media and entertainment. According to literature on the Japanese digital native, tertiary students do not—and cannot—use technology with any reasonable fluency, but the likely reasons are rarely addressed. To fill this gap in the literature, this study, employing a case study methodology, explores students’ experience with technology for English learning through the introduction of digital tools. First-year Japanese university students in an Academic English Program (AEP) were introduced to a variety of easily available digital tools. The instruction was administered online, and each tool was accompanied by a task directly related to classwork. Both quantitative and qualitative data were collected in the form of a pre-course Computer Literacy Survey, a post-course open-ended Reflection Activity survey, and interviews. The qualitative data were reviewed drawing on the Technology Acceptance Model (TAM) and its educational variants as an analytical framework. Educational, social, and cultural factors were also examined to help identify underlying factors that would influence students’ perceptions. The results suggest that the subjects’ lack of awareness of, and experience with, the use of technology for learning is the fundamental cause of their perceptions of initial difficulty. Based on these findings, this study proposes a possible technology integration model that enhances digital literacy for more effective language learning in the context of Japanese education.
Organizations decentered: data objects, technology and knowledge
Data are no longer simply a component of administrative and managerial work but a pervasive resource and medium through which organizations come to know and act upon the contingencies they confront. We theorize how ongoing technological developments reinforce the traditional functions of data as instruments of management and control but also reframe and extend their role. By rendering data as technical entities, digital technologies transform the process of knowing and the knowledge functions data fulfil in socioeconomic life. These functions are most of the time mediated by putting together dispersed and steadily updatable data in more stable entities we refer to as data objects. Users, customers, products, and physical machines rendered as data objects become the technical and cognitive means through which organizational knowledge, patterns, and practices develop. Such conditions loosen the dependence of data on domain knowledge, reorder the relative significance of internal versus external references in organizations, and contribute to a paradigmatic contemporary development that we identify as the decentering of organizations, of which digital platforms are an important specimen.
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient
means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and
control idioms as shareable libraries. Effect handlers provide a particularly structured
approach to programming with first-class control by naming control-reifying operations
and separating them from their handling.
This thesis comprises three strands of work in which I develop operational
foundations for programming and implementing effect handlers and explore
the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically
typed programming language with a structural notion of effect types, as opposed to the
nominal notion of effect types that dominates the literature. With the structural approach,
effects need not be declared before use. The usual safety properties of statically typed
programming are retained by making crucial use of row polymorphism to build and
track effect signatures. The calculus features three forms of handlers: deep, shallow,
and parameterised. They each offer a different approach to manipulating the control state
of programs. Traditional deep handlers are defined by folds over computation trees,
and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are
defined by case splits (rather than folds) over computation trees. Parameterised handlers
are deep handlers extended with a state value that is threaded through the folds over
computation trees. To demonstrate the usefulness of effects and handlers as a practical
programming abstraction, I implement the essence of a small UNIX-style operating
system complete with multi-user environment, time-sharing, and file I/O.
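The distinction between the handler forms can be illustrated outside the calculus itself. The sketch below uses Python generators to mimic a deep handler for state effects in the parameterised style: the computation yields operations, and the handler folds over the entire computation, threading a state value through the fold (a shallow handler would instead interpret only the next operation and hand the rest of the computation back to its caller). This is an illustration of the general idea only, not the thesis's calculus; all names are hypothetical.

```python
def counter_prog():
    # A computation that performs 'get' and 'put' state effects by yielding them.
    x = yield ("get",)
    yield ("put", x + 1)
    y = yield ("get",)
    return y * 2

def deep_handle_state(gen, state):
    """Deep handler for get/put: a fold over the whole computation,
    threading `state` through, as a parameterised handler would."""
    try:
        op = next(gen)
        while True:
            if op[0] == "get":
                op = gen.send(state)   # resume the computation with the current state
            elif op[0] == "put":
                state = op[1]          # update the threaded parameter
                op = gen.send(None)
    except StopIteration as done:
        return done.value, state       # final result and final state

result, final_state = deep_handle_state(counter_prog(), 41)
print(result, final_state)  # 84 42
```

Because the handler owns the whole loop, it interprets every operation until the computation returns; exposing the suspended generator after a single operation instead would correspond to the shallow, case-split style.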
The second strand studies continuation passing style (CPS) and abstract machine
semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The
CPS translation is obtained through a series of refinements of a basic first-order CPS
translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually
arriving at the notion of generalised continuation, which admits simultaneous support for
deep, shallow, and parameterised handlers. The initial refinement adds support for deep
handlers by representing stacks of continuations and handlers as a curried sequence of
arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the
CPS translation is refined once more to obtain an uncurried representation of stacks
of continuations and handlers. Finally, the translation is made higher-order in order to
contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for
deep, shallow, and parameterised effect handlers.
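The CPS idea underlying this strand can be shown with a tiny direct-style-to-CPS example (written in Python for concreteness; the thesis targets an untyped core language, and these function names are illustrative only): every function takes an extra continuation argument and, instead of returning, passes its result on, so sequencing becomes explicit continuation nesting.

```python
def square_cps(x, k):
    # CPS convention: pass the result to the continuation k instead of returning it.
    return k(x * x)

def add_cps(x, y, k):
    return k(x + y)

def hypot_sq_cps(a, b, k):
    # Sequencing in CPS: each step's continuation names the intermediate result,
    # so evaluation order is made fully explicit in the program text.
    return square_cps(a, lambda a2:
           square_cps(b, lambda b2:
           add_cps(a2, b2, k)))

print(hypot_sq_cps(3, 4, lambda r: r))  # 25
```

A handler-aware translation generalises this by passing not a single continuation but a stack of continuations and handler frames, which is the role of the generalised continuations described above.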
The third strand explores the expressiveness of effect handlers. First, I show that
deep, shallow, and parameterised notions of handlers are interdefinable by way of typed
macro-expressiveness, which provides a syntactic notion of expressiveness that affirms
the existence of encodings between handlers but provides no information about the
computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs, a programming language with first-class
control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.