Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
We propose Conditional Adapter (CoDA), a parameter-efficient transfer
learning method that also improves inference efficiency. CoDA generalizes
beyond standard adapter approaches to enable a new way of balancing speed and
accuracy using conditional computation. Starting with an existing dense
pretrained model, CoDA adds sparse activation together with a small number of
new parameters and a light-weight training phase. Our experiments demonstrate
that the CoDA approach provides an unexpectedly efficient way to transfer
knowledge. Across a variety of language, vision, and speech tasks, CoDA
achieves a 2x to 8x inference speed-up compared to the state-of-the-art Adapter
approach with moderate to no accuracy loss and the same parameter efficiency.
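The conditional-computation idea can be illustrated with a toy sketch: a router scores each token, and only the k highest-scoring tokens pass through the small adapter while the rest bypass it unchanged. Everything below (the router scores, the adapter's form, all names) is a hypothetical illustration, not CoDA's actual architecture.

```python
# Toy illustration of conditional adapter routing (hypothetical names,
# not CoDA's real code): only the k highest-scoring tokens are sent
# through the adapter; the rest skip it, which is where the inference
# speed-up comes from.

def adapter(x, scale=0.1):
    """A stand-in for a small adapter: a residual tweak to a token vector."""
    return [v + scale * v for v in x]

def conditional_adapter(tokens, router_scores, k):
    """Apply the adapter only to the k tokens with the highest router scores."""
    top_k = sorted(range(len(tokens)),
                   key=lambda i: router_scores[i], reverse=True)[:k]
    chosen = set(top_k)
    return [adapter(t) if i in chosen else t for i, t in enumerate(tokens)]

tokens = [[1.0, 2.0], [3.0, 4.0], [0.5, 0.5]]
scores = [0.9, 0.1, 0.8]              # router selects tokens 0 and 2
out = conditional_adapter(tokens, scores, k=2)
```

With k equal to the sequence length this reduces to a standard adapter applied everywhere; shrinking k trades accuracy for speed, which is the speed/accuracy dial the abstract describes.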
Machine Learning Research Trends in Africa: A 30 Years Overview with Bibliometric Analysis Review
In this paper, a critical bibliometric analysis study is conducted, coupled
with an extensive literature survey on recent developments and associated
applications in machine learning research with a perspective on Africa. The
presented bibliometric analysis study consists of 2761 machine learning-related
documents, of which 98% were articles with at least 482 citations published in
903 journals during the past 30 years. Furthermore, the collated documents were
retrieved from the Science Citation Index EXPANDED, comprising research
publications from 54 African countries between 1993 and 2021. The bibliometric
study shows the visualization of the current landscape and future trends in
machine learning research and its application to facilitate future
collaborative research and knowledge exchange among authors from different
research institutions scattered across the African continent.
Decoding spatial location of attended audio-visual stimulus with EEG and fNIRS
When analyzing complex scenes, humans often focus their attention on an object at a particular spatial location in the presence of background noise and irrelevant visual objects. The ability to decode the attended spatial location would facilitate brain computer interfaces (BCI) for complex scene analysis. Here, we tested two different neuroimaging technologies and investigated their capability to decode audio-visual spatial attention in the presence of competing stimuli from multiple locations. For functional near-infrared spectroscopy (fNIRS), we targeted the dorsal frontoparietal network, including the frontal eye field (FEF) and intra-parietal sulcus (IPS), as well as the superior temporal gyrus/planum temporale (STG/PT). All of these regions were shown in previous functional magnetic resonance imaging (fMRI) studies to be activated by auditory, visual, or audio-visual spatial tasks. We found that fNIRS provides robust decoding of attended spatial locations for most participants, and that decoding accuracy correlates with behavioral performance. Moreover, we found that FEF makes a large contribution to decoding performance. Surprisingly, performance was significantly above chance level 1 s after cue onset, which is well before the peak of the fNIRS response.
For electroencephalography (EEG), while there are several successful EEG-based algorithms, to date all of them have focused exclusively on the auditory modality, where eye-related artifacts are minimized or controlled. Successful integration into more ecologically typical usage requires careful consideration of eye-related artifacts, which are inevitable. We showed that fast and reliable decoding can be done with or without an ocular-artifact removal algorithm. Our results show that EEG and fNIRS are promising platforms for compact, wearable technologies that could be applied to decode attended spatial location and to reveal the contributions of specific brain regions during complex scene analysis.
Co-design As Healing: Exploring The Experiences Of Participants Facing Mental Health Problems
This thesis is an exploration of the healing role of co-design in mental health. Although co-design projects conducted within mental health settings are rising, existing literature tends to focus on the object of design and its outcomes while the experiences of participants per se remain largely unexplored. The guiding research question of this study is not how we design things that improve mental health, but how co-designing, as an act, might do so.
The thesis presents two projects that were organized in collaboration with the mental health charity Islington Mind and the Psychosis Therapy Project (PTP) in London.
The project at Islington Mind used a structured design process inviting participants to design for wellbeing. A case study analysis provides insights on how participants were impacted, summarizing key challenges and opportunities.
The design at PTP worked towards creating a collective brief in an emergent fashion, finally culminating in a board game. The experiences of participants were explored through Interpretative Phenomenological Analysis (IPA), using semi-structured interview data. The analysis served to identify key themes characterising the experience of co-design such as contributing, connecting, thinking and intentioning. In addition, a mixed-methods analysis of questionnaires and interview data exploring participants' wellbeing, showed that all participants who engaged fairly consistently in the project improved after the project ended, although some participants' scores returned to baseline six months later.
Reflecting on both projects, an approach to facilitation within mental health is outlined, detailing how the dimensions of weaving and layered participation, nurturing mattering and facilitating attitudes interlace. This contribution raises awareness of tacit dimensions in the practice of facilitation, articulating the nuances of how to encourage and sustain meaningful and ethical engagement and offering insights into a range of tools. It highlights the importance of remaining reflexive in relation to attitudes and emotions and discusses practical methodological and ethical challenges and ways to resolve them which can be of benefit to researchers embarking on a similar journey.
The thesis also offers detailed insights on how methodologies from different fields were integrated into a whole, arguing for transparency and reflexivity about epistemological assumptions, and how underlying paradigms shift in an interdisciplinary context.
Based on the overall findings, the thesis makes a case for considering design as healing (or a designerly way of healing), highlighting implications at a systems, social and individual level. It makes an original contribution to our understanding of design, highlighting its healing character, and proposes a new way to support mental health. The participants in this study not only increased their own wellbeing through co-designing, but were also empowered and contributed towards healing the world. Hence, the thesis argues for a unique, holistic perspective of design and mental health, recognizing the interconnectedness of the individual, social and systemic dimensions of the healing processes that are ignited.
Omics measures of ageing and disease susceptibility
While genomics has been a major field of study for decades due to relatively inexpensive genotyping arrays, recent advances in technology have also allowed the measurement and study of various “omics”. There are now numerous methods and platforms available that allow high-throughput, high-dimensional quantification of many types of biological molecules. Traditional genomics and transcriptomics are now joined by proteomics, metabolomics, glycomics, lipidomics and epigenomics.
I was lucky to have access to a unique resource in the Orkney Complex Disease Study (ORCADES), a cohort of individuals from the Orkney Islands that are extremely deeply annotated. Approximately 1000 individuals in ORCADES have genomics, proteomics, lipidomics, glycomics, metabolomics, epigenomics, clinical risk factors and disease phenotypes, as well as body composition measurements from whole body scans. In addition to these cross-sectional omics and health related measures, these individuals also have linked electronic health records (EHR) available, allowing the assessment of the effect of these omics measures on incident disease over a ~10-year follow up period. In this thesis I use this phenotype rich resource to investigate the relationship between multiple types of omics measures and both ageing and health outcomes.
First, I used the ORCADES data to construct measures of biological age (BA). The idea is that there is an underlying rate at which the body deteriorates with age, and that this rate varies between individuals of the same chronological age; this biological age would be more indicative of health status, functional capacity and risk of age-related diseases than chronological age. Previous models estimating BA (ageing clocks) have predominantly been built using a single type of omics assay, and comparison between different omics ageing clocks has been limited. I performed the most exhaustive comparison of different omics ageing clocks yet, with eleven clocks spanning nine different omics assays. I show that different omics clocks overlap in the information they provide about age, that some omics clocks track more generalised ageing while others track specific disease risk factors, and that omics ageing clocks are prognostic of incident disease over and above chronological age.
Second, I assessed whether individually or in multivariable models, omics measures are associated with health-related risk factors or prognostic of incident disease over 10 years post-assessment. I show that 2,686 single omics biomarkers are associated with 10 risk factors and 44 subsequent incident diseases. I also show that models built using multiple biomarkers from whole body scans, metabolomics, proteomics and clinical risk factors are prognostic of subsequent diabetes mellitus and that clinical risk factors are prognostic of incident hypertensive disorders, obesity, ischaemic heart disease and Framingham risk score.
Third, I investigated the genetic architecture of a subset of the proteomics measures available in ORCADES, specifically 184 cardiovascular-related proteins. Combining genome-wide association study (GWAS) summary statistics from ORCADES and 17 other cohorts from the SCALLOP Consortium, giving a maximum sample size of 26,494 individuals, I performed 184 genome-wide association meta-analyses (GWAMAs) on the levels of these proteins circulating in plasma. I discovered 592 independent significant loci associated with the levels of at least one protein. I found that between 8% and 37% of these significant loci colocalise with known expression quantitative trait loci (eQTL). I also find evidence of causal associations between 11 plasma protein levels and disease susceptibility using Mendelian randomisation, highlighting potential candidate drug targets.
How to Be a God
When it comes to questions concerning the nature of Reality, Philosophers and Theologians have the answers.
Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong.
Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice.
That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer.
The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves?
How should we be gods?
Chinese Benteng Women’s Participation in Local Development Affairs in Indonesia: Appropriate means for struggle and a pathway to claim citizens’ rights?
More than two decades have passed since the devastating Asian Financial Crisis of 1997 and Suharto’s subsequent fall from the presidency he had occupied for more than three decades. The financial turmoil turned into a political disaster that led to massive looting, which severely impacted Indonesians of Chinese descent, as well as the still-unresolved atrocities of sexual violence against women and the covert killings of students and democracy activists. Since May 1998, a period publicly known as “Reformasi”, Indonesia has undergone political reform that eventually translated into macroeconomic growth. Twenty years later, in 2018, Indonesia captured worldwide attention by successfully hosting two internationally renowned events: the Asian Games 2018, the most prestigious sporting event in Asia, held in Jakarta and Palembang; and the IMF/World Bank Annual Meeting 2018 in Bali. The latter in particular elevated Indonesia’s credibility and international prestige in the global economic power play, as one of the nations with promising growth and openness. However, narratives about poverty and inequality, including rising racial tension, religious conservatism, and sexual violence against women, have been superseded by a climate friendly to foreign investment and an excessive glorification of the nation’s economic growth. By projecting the image of a promising new economic power, as rhetorically promised by President Joko Widodo during his presidential terms, Indonesia has swept the growing inequality of this highly stratified society under the carpet of the digital economy.
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised, tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges, characterised by on-the-fly syntax lookup and code example integration, it also presents opportunities. One such opportunity is how easily tabular data structures are visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas, and how they fit into a wider research programme on data wrangling instruction.
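As a concrete example of the kind of reshaping task the dissertation targets, here is a minimal wide-to-long "melt" written in plain Python. In practice students would use an API such as pandas' melt or tidyr's pivot_longer; the function and sample data below are invented for illustration.

```python
# A wide-to-long reshape ("melt") in plain Python: one wide row per
# subject becomes one long row per measurement. Function and sample
# data are invented for illustration.

def melt(rows, id_col, value_cols):
    """Reshape wide records into long (id, variable, value) records."""
    long_rows = []
    for row in rows:
        for col in value_cols:
            long_rows.append({id_col: row[id_col],
                              "variable": col,
                              "value": row[col]})
    return long_rows

wide = [{"subject": "s1", "pre": 10, "post": 14},
        {"subject": "s2", "pre": 12, "post": 11}]
long_data = melt(wide, "subject", ["pre", "post"])
# long_data[0] is {"subject": "s1", "variable": "pre", "value": 10}
```

The long format is what most plotting and modelling APIs expect, which is why this particular reshape is such a recurring subgoal in wrangling curricula.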
The Neural Mechanisms of Value Construction
Research in decision neuroscience has characterized how the brain makes decisions by assessing the expected utility of each option in an abstract value space that affords the ability to compare dissimilar options. Experiments at multiple levels of analysis in multiple species have localized the ventromedial prefrontal cortex (vmPFC) and nearby orbitofrontal cortex (OFC) as the main nexus where this abstract value space is represented. However, much less is known about how this value code is constructed by the brain in the first place. By using a combination of behavioral modeling and cutting edge tools to analyze functional magnetic resonance imaging (fMRI) data, the work of this thesis proposes that the brain decomposes stimuli into their constituent attributes and integrates across them to construct value. These stimulus features embody appetitive or aversive properties that are either learned from experience or evaluated online by comparing them to previously experienced stimuli with similar features. Stimulus features are processed by cortical areas specialized for the perception of a particular stimulus type and then integrated into a value signal in vmPFC/OFC.
The project presented in Chapter 2 examines how food items are evaluated by their constituent attributes, namely their nutrient makeup. A linear attribute integration model succinctly captures how subjective values can be computed from a weighted combination of the constituent nutritive attributes of the food. Multivariate analysis methods revealed that these nutrient attributes are represented in the lateral OFC, while food value is encoded both in medial and lateral OFC. Connectivity between lateral and medial OFC allows this nutrient attribute information to be integrated into a value representation in medial OFC.
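The linear attribute integration model described above amounts to a weighted sum over nutrient attributes. A minimal sketch follows, with invented weights and attribute values rather than the ones fitted to participants' choices in the thesis.

```python
# Linear attribute integration: subjective value as a weighted sum of
# nutrient attributes. Weights and attribute values are invented for
# illustration, not the ones estimated from behavioral data.

def subjective_value(attributes, weights):
    """Weighted combination of a food's constituent nutritive attributes."""
    return sum(weights[name] * level for name, level in attributes.items())

weights = {"fat": 0.8, "carbohydrate": 0.5, "protein": 0.3, "vitamins": 0.2}
snack = {"fat": 0.6, "carbohydrate": 0.9, "protein": 0.1, "vitamins": 0.2}
v = subjective_value(snack, weights)   # 0.48 + 0.45 + 0.03 + 0.04 = 1.0
```

Fitting the weight vector to each participant's choices is what lets the model capture individual differences in how nutritive attributes drive value.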
In Chapter 3, I show that this value construction process can operate over higher-level abstractions when the context requires bundles of items to be valued, rather than isolated items. When valuing bundles of items, the constituent items themselves become the features, and their values are integrated with a subadditive function to construct the value of the bundle. Multiple subregions of PFC including but not limited to vmPFC compute the value of a bundle with the same value code used to evaluate individual items, suggesting that these general value regions contextually adapt within this hierarchy. When valuing bundles and single items in interleaved trials, the value code rapidly switches between levels in this hierarchy by normalizing to the distribution of values in the current context rather than representing all options on an absolute scale.
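The subadditive integration over item values can likewise be sketched with a toy discount factor; the functional form and the numbers below are illustrative, not the function fitted in the thesis.

```python
# Toy subadditive bundle valuation: the bundle is worth less than the
# sum of its items' values but more than any single item alone.

def bundle_value(item_values, alpha=0.8):
    # alpha < 1 enforces subadditivity (illustrative choice of form).
    return alpha * sum(item_values)

items = [10.0, 6.0]
v = bundle_value(items)   # 12.8: below the sum (16.0), above the max item (10.0)
```

Any concave or discounted combination rule produces the same qualitative signature: bundles are valued below the sum of their parts.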
Although the attribute integration model of value construction characterizes human behavior on simple decision-making tasks, it is unclear how it can scale up to environments of real-world complexity. Taking inspiration from modern advances in artificial intelligence, and deep reinforcement learning in particular, in Chapter 4 I outline how connectionist models generalize the attribute integration model to naturalistic tasks by decomposing sensory input into a high dimensional set of nonlinear features that are encoded with hierarchical and distributed processing. Participants freely played Atari video games during fMRI scanning, and a deep reinforcement learning algorithm trained on the games was used as an end-to-end model for how humans evaluate actions in these high-dimensional tasks. The features represented in the intermediate layers of the artificial neural network were found to also be encoded in a distributed fashion throughout the cortex, specifically in the dorsal visual stream and posterior parietal cortex. These features emerge from nonlinear transformations of the sensory input that connect perception to action and reward. In contrast to the stimulus attributes used to evaluate the stimuli presented in the preceding chapters, these features become highly complex and inscrutable as they are driven by the statistical properties of high-dimensional data. However, they do not solely reflect a set of features that can be identified by applying common dimensionality reduction techniques to the input, as task-irrelevant sensory features are stripped away and task-relevant high-level features are magnified.