3,045 research outputs found
Unsupervised learning-based approach for detecting 3D edges in depth maps
3D edge features, which represent the boundaries between different objects or surfaces in a 3D scene, are crucial for many computer vision tasks, including object recognition, tracking, and segmentation. They also have numerous real-world applications in the field of robotics, such as vision-guided grasping and manipulation of objects. To extract these features from noisy real-world depth data, reliable 3D edge detectors are indispensable. However, currently available 3D edge detection methods are either highly parameterized or require ground-truth labelling, which makes them challenging to use in practical applications. To this end, we present a new 3D edge detection approach using unsupervised classification. Our method learns features from depth maps at three different scales using an encoder-decoder network, from which edge-specific features are extracted. These edge features are then clustered using unsupervised learning to classify each point as an edge or not. The proposed method has two key benefits. First, it eliminates the need for manual fine-tuning of data-specific hyper-parameters and automatically selects threshold values for edge classification. Second, the method does not require any labelled training data, unlike many state-of-the-art methods that require supervised training with extensive hand-labelled datasets. The proposed method is evaluated on five benchmark datasets with single- and multi-object scenes, and compared with four state-of-the-art edge detection methods from the literature. Results demonstrate that the proposed method achieves competitive performance, despite not using any labelled data or relying on hand-tuning of key parameters.
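The clustering step described above can be illustrated with a minimal sketch: given per-point edge features (here synthetic stand-ins, since the paper's encoder-decoder features are not reproduced), a two-cluster 1D k-means on feature magnitudes separates edge from non-edge points without any hand-set threshold. This is an illustrative assumption about the mechanism, not the paper's exact implementation.

```python
import numpy as np

def classify_edges(edge_features, iters=20):
    """Two-cluster 1D k-means on per-point feature magnitudes; the
    cluster with the larger centroid is labelled 'edge' (1)."""
    norms = np.linalg.norm(edge_features, axis=1)
    centroids = np.array([norms.min(), norms.max()])  # initialise at the extremes
    for _ in range(iters):
        # Assign each point to its nearest centroid, then update centroids.
        assign = np.abs(norms[:, None] - centroids[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centroids[k] = norms[assign == k].mean()
    return (assign == centroids.argmax()).astype(int)

# Synthetic demo: points with strong feature responses should be labelled edges.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (50, 3)),   # flat-surface points
                   rng.normal(5.0, 0.1, (10, 3))])  # edge-like points
labels = classify_edges(feats)
```

Because the split is learned from the data itself, no dataset-specific threshold needs to be tuned, which mirrors the first benefit the abstract claims.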
Bioactive Self-Assembled Protein Nanosheets for Stem Cell-Based Biotechnologies
Tissue and stem cell culture methods have been dominated by glass and plastic substrates such as tissue culture plastic. These solid substrates, although widely used, are associated with poor scalability for adherent stem cell expansion in systems such as 3D bioreactors and in the design of parallel culture systems. Therefore, investigating strategies to bypass these obstacles in stem cell expansion is essential to enable the wider translation of stem cell technologies. An alternative strategy recently proposed consists in using a liquid substrate instead, such as an oil, and the associated oil droplets. Indeed, emulsions can be formed using protein nanosheets that stabilise oil/water interfaces, promote the adhesion of stem cells and enable their proliferation. These nanosheets exhibit enhanced interfacial mechanics and allow the introduction of bioactive components via recombinant protein expression to promote bioactivity. Beyond the application of the resulting bioemulsions to the expansion of mesenchymal stem cells, the impact of these bioactive interfaces on the differentiation of iPSCs and the development of cerebral organoids will be presented. Bovine serum albumin (BSA) was recombinantly modified to carry an N-terminal Avi-tag; the protein was expressed in and purified from the yeast P. pastoris expression system. The Avi-tag was then biotinylated in vitro by recombinantly expressed BirA. Emulsions of a defined droplet size were formed using the biotinylated Bt-BSA protein and functionalised with a cascade of components to mimic cell-cell ligands, resulting in bioemulsions with a bioactive surface that can interact with surrounding cells. These functionalised droplets were integrated into developing cerebral organoids and their impact on phenotype was studied.
The droplets were found not to deform sufficiently to allow mechanical forces to be measured, yet many of these droplets were retained within the organoids, which led to an interesting phenotype. The developing rosettes formed enlarged lumens, shown by an increase in area; this phenotype did not affect differentiation into the cerebral lineage, as depicted by immunohistochemistry of hallmark markers of neuronal differentiation within organoids retaining droplets. The interfacial mechanics of fibrinogen nanosheets treated with varying concentrations of thrombin were studied using interfacial shear rheology. Thrombin significantly altered the interfacial mechanics, with the lower concentration increasing the toughness several-fold and decreasing the elasticity of the nanosheets. Additionally, the nanostructure of the nanosheets was studied using SEM and TEM; traditional fibrin fibres were found not to form at these interfaces, but local rearrangements and retractions in the thrombin-treated nanosheets were observed. Finally, these enhanced mechanical properties promoted the proliferation and expansion of mesenchymal stem cells on quasi-2D and 3D interfaces.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
GPT models in construction industry: Opportunities, limitations, and a use case validation
Large Language Models (LLMs) trained on large data sets came into prominence in 2018 after Google introduced BERT. Subsequently, different LLMs such as GPT models from OpenAI have been released. These models perform well on diverse tasks and have been gaining widespread applications in fields such as business and education. However, little is known about the opportunities and challenges of using LLMs in the construction industry. Thus, this study aims to assess GPT models in the construction industry. A critical review, expert discussion and case study validation are employed to achieve the study's objectives. The findings revealed opportunities for GPT models throughout the project lifecycle. The challenges of leveraging GPT models are highlighted and a use case prototype is developed for materials selection and optimization. The findings of the study would be of benefit to researchers, practitioners and stakeholders, as they present research vistas for LLMs in the construction industry.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks
The transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data. Unlike conventional neural networks or updated versions of Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM), transformer models excel in handling long dependencies between input sequence elements and enable parallel processing. As a result, transformer-based models have attracted substantial interest among researchers in the field of artificial intelligence. This can be attributed to their immense potential and remarkable achievements, not only in Natural Language Processing (NLP) tasks but also in a wide range of domains, including computer vision, audio and speech processing, healthcare, and the Internet of Things (IoT). Although several survey papers have been published highlighting the transformer's contributions in specific fields, architectural differences, or performance evaluations, there is still a significant absence of a comprehensive survey paper encompassing its major applications across various domains. Therefore, we undertook the task of filling this gap by conducting an extensive survey of proposed transformer models from 2017 to 2022. Our survey encompasses the identification of the top five application domains for transformer-based models, namely: NLP, Computer Vision, Multi-Modality, Audio and Speech Processing, and Signal Processing. We analyze the impact of highly influential transformer-based models in these domains and subsequently classify them based on their respective tasks using a proposed taxonomy. Our aim is to shed light on the existing potential and future possibilities of transformers for enthusiastic researchers, thus contributing to the broader understanding of this groundbreaking technology.
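The self-attention mechanism the abstract refers to can be sketched as scaled dot-product attention: each token's output is a weighted average of all token values, with weights given by a softmax over query-key similarities. This is a minimal NumPy illustration of the generic mechanism, not a reproduction of any specific model from the survey; the learned Q/K/V projection matrices of a real transformer are omitted.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights

# Toy self-attention over 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
```

Because every token attends to every other token in one matrix product, distant positions interact directly rather than through a recurrent chain, which is the source of the long-dependency handling and parallelism the abstract contrasts with RNNs and LSTMs.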
Posthuman Creative Styling: can a creative writer’s style of writing be described as procedural?
This thesis is about creative styling: the styling a creative writer might use to make their writing unique. It addresses the question of whether such styling can be described as procedural. Creative styling is part of the technique a creative writer uses when writing. It is how they make the text more ‘lively’ through tips and tricks they have either learned or discovered. In essence these are rules, ones the writer accrues over time through practice. The thesis argues that the use and invention of these rules can be set out as procedures, and so describes creative styling as procedural.
The thesis follows from questioning why machines or algorithms have, so far, been incapable of producing creative writing which has value. Machine-written novels do not abound on the bookshelves, and writing styled by computers is, on the whole, dull in comparison to human-crafted literature. It came about by considering how it would be possible to reach a point where writing by people and procedural writing are considered to have equal value. For this reason the thesis is set in a posthuman context, where the differences between machines and people are erased.
The thesis uses practice to inform an original conceptual space model, based on quality dimensions and the dynamic inter-operation of spaces. This model gives an example of the procedures which a posthuman creative writer uses when engaged in creative styling. It suggests an original formulation for the conceptual blending of conceptual spaces, based on the casting of qualities from one space to another. In support of and informing its arguments are ninety-nine examples of creative writing practice which show the procedures by which style has been applied, created and assessed. It provides a route forward for further joint research into both computational and human-coded creative writing.
Understanding the Impact of Covid-19 on Ethnic Minority Students: a Case Study of Open University Level 1 Computing Modules
As reported in [1], ‘Of the disparities that exist within higher education, the gap between the likelihood of White students and students from Black, Asian or minority ethnic backgrounds getting a first- or upper-second-class degree is among the starkest’. In the Open University (OU), for example, a recent study [2] found students from ethnic minorities to be at least 20% less likely to achieve excellent grades and to spend 4-12% more study time to achieve the same performance as white students. Moreover, with the advent of COVID-19, a growing body of research suggested that students from these groups suffer disproportionately from the impacts of the pandemic [3], which inevitably affects their study experiences. However, recent research in the OU found that some COVID-19 arrangements, such as the change of examination mode and changes in work-life patterns, have impacted students from ethnic minority backgrounds differently. In this paper we present findings from a project aiming to understand the impact of COVID-19 on ethnic minority students’ study experiences and performance. By means of a combination of qualitative and quantitative data analytics, we first analysed study performance and patterns of progression; then, by conducting focus groups with the teaching staff, we assessed the impact of COVID-19 on the lived experiences of the students.
[1] Black, Asian and Minority Ethnic Student Attainment at UK Universities (2022). Available at: https://www.universitiesuk.ac.uk.
[2] Nguyen, Q., Rienties, B. and Richardson, J.T.E. (2020) Learning analytics to uncover inequality in behavioural engagement and academic attainment in a distance learning setting, Assessment & Evaluation in Higher Education, 45:4, 594-606.
[3] Arday, J. and Jones, C. (2022) “Same storm, different boats: The impact of covid-19 on black students and academic staff in UK and US higher education,” Higher Education. Available at: https://doi.org/10.1007/s10734-022-00939-0
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attending economic hopes and fears, utopias and dystopias that are associated with the current and future development of artificial intelligence.