Machine learning in solar physics
The application of machine learning in solar physics has the potential to
greatly enhance our understanding of the complex processes that take place in
the atmosphere of the Sun. By using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations
and identify patterns and trends that may not have been apparent using
traditional methods. This can help us improve our understanding of explosive events like solar flares, which can have a strong effect on the Earth's environment. Predicting such hazardous events has become crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, the use of machine learning can help to automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field.
Comment: 100 pages, 13 figures, 286 references, accepted for publication as a Living Review in Solar Physics (LRSP)
A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges
Measuring and evaluating source code similarity is a fundamental software
engineering activity that embraces a broad range of applications, including but
not limited to code recommendation, duplicate code, plagiarism, malware, and
smell detection. This paper proposes a systematic literature review and
meta-analysis on code similarity measurement and evaluation techniques to shed
light on the existing approaches and their characteristics in different
applications. We initially found over 10000 articles by querying four digital
libraries and ended up with 136 primary studies in the field. The studies were
classified according to their methodology, programming languages, datasets,
tools, and applications. A deep investigation reveals 80 software tools,
working with eight different techniques on five application domains. Nearly 49% of the tools work on Java programs and 37% support C and C++, while many other programming languages have no tool support. A noteworthy point was the existence of 12 datasets related to source code similarity measurement and duplicate code, of which only eight were publicly accessible. The lack of reliable datasets, empirical evaluations, hybrid methods, and support for multi-paradigm languages are the main challenges in the field. Emerging applications of code similarity measurement concentrate on the development phase in addition to the maintenance phase.
Comment: 49 pages, 10 figures, 6 tables
Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision
Large language models (LLMs) have demonstrated remarkable capabilities out of the box for a wide range of applications, yet accuracy remains a major area for improvement, especially in mission-critical domains such as biomedicine. An effective method to calibrate the confidence level of LLM responses is essential to
automatically detect errors and facilitate human-in-the-loop verification. An
important source of calibration signals stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every response, without any additional manual efforts. This is accomplished
by learning a harmonizer model to align LLM output with other available
supervision sources, which would assign higher risk scores to more uncertain
LLM responses and facilitate error correction. Experiments on standard relation
extraction tasks in biomedical and general domains demonstrate the promise of
this approach, with our proposed risk scores highly correlated with the real
error rate of LLMs. For the most uncertain test instances, dynamic prompting
based on our proposed risk scores results in significant accuracy improvement
for off-the-shelf LLMs, boosting GPT-3 results past state-of-the-art (SOTA)
weak supervision and GPT-4 results past SOTA supervised results on challenging
evaluation datasets.
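As a rough, hypothetical illustration of the calibration idea sketched in this abstract (not the authors' Pareto-optimal implementation), the snippet below fits a simple harmonizer on votes from weak programmatic supervision sources and treats disagreement with the LLM's answer as a risk score; all data and names are made up.

```python
# A minimal sketch of the calibration idea: fit a "harmonizer" on weak
# programmatic supervision sources and use its disagreement with the LLM
# answer as a risk score. All names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: votes from weak supervision sources (e.g. heuristics), as class ids.
weak_votes = np.array([[0, 0, 1],
                       [1, 1, 1],
                       [0, 1, 0],
                       [1, 1, 0]])
llm_answers = np.array([0, 1, 0, 1])   # labels produced by the LLM on the same inputs

# Harmonizer: a simple model that aligns LLM output with the weak sources.
harmonizer = LogisticRegression().fit(weak_votes, llm_answers)

def risk_score(votes, llm_label):
    """Higher score = less agreement between weak sources and the LLM answer."""
    proba = harmonizer.predict_proba(votes.reshape(1, -1))[0]
    return 1.0 - proba[llm_label]

# Responses with high risk can be routed to dynamic re-prompting or human review.
for votes, label in zip(weak_votes, llm_answers):
    print(label, round(risk_score(votes, label), 3))
```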
Modular lifelong machine learning
Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge.
Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to only reuse the subset of modules which are useful for the task at hand.
This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.
Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.
Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvement in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods.
Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
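A toy sketch of the modular idea described above, assuming a library of frozen pre-trained modules and a search over candidate combinations; it is not HOUDINI or PICLE, and all module names, shapes, and data are illustrative.

```python
# A toy sketch (not HOUDINI or PICLE) of the modular idea: keep a library of
# pre-trained modules and, for a new problem, search over combinations of
# frozen library modules plus a fresh output module. All names are illustrative.
import itertools
import torch
import torch.nn as nn

library = {                      # pre-trained, frozen feature modules
    "conv_digits": nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU()),
    "conv_fashion": nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU()),
}
for m in library.values():
    m.requires_grad_(False)

def build_candidate(module_names, n_classes=10):
    """Compose chosen library modules with a new trainable head."""
    body = nn.Sequential(*(library[n] for n in module_names))
    head = nn.Linear(64, n_classes)          # only this part is trained
    return nn.Sequential(body, head)

# Enumerate single-module choices; a real system would score each candidate
# on validation data and keep the best-performing combination.
for names in itertools.combinations(library, 1):
    model = build_candidate(names)
    x = torch.randn(2, 1, 28, 28)
    print(names, model(x).shape)
```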
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Constructing Tree-based Index for Efficient and Effective Dense Retrieval
Recent studies have shown that Dense Retrieval (DR) techniques can
significantly improve the performance of first-stage retrieval in IR systems.
Despite its empirical effectiveness, the application of DR is still limited. In contrast to statistical retrieval models that rely on highly efficient inverted index solutions, DR models build dense embeddings that are difficult to pre-process with most existing search indexing systems. To avoid the expensive cost of brute-force search, Approximate Nearest Neighbor (ANN) algorithms and corresponding indexes are widely applied to speed up the inference process of DR models. Unfortunately, while ANN can improve the efficiency of DR models, it usually comes at a significant cost in retrieval performance.
To solve this issue, we propose JTR, which stands for Joint optimization of
TRee-based index and query encoding. Specifically, we design a new unified
contrastive learning loss to train tree-based index and query encoder in an
end-to-end manner. A tree-based negative sampling strategy is applied so that the tree satisfies the max-heap property, which supports effective beam search. Moreover, we treat cluster assignment as an optimization problem to update the tree-based index, allowing overlapped clustering.
evaluate JTR on numerous popular retrieval benchmarks. Experimental results
show that JTR achieves better retrieval performance while retaining high system
efficiency compared with widely adopted baselines. It provides a potential solution to balance efficiency and effectiveness in neural retrieval system designs.
Comment: 10 pages, accepted at SIGIR 202
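A simplified sketch of beam search over a two-level tree index, in the spirit of the abstract above but not the JTR implementation; plain centroid scores stand in for the trained node representations, and all data are random placeholders.

```python
# A simplified sketch (not the JTR implementation) of beam search over a
# two-level tree index: cluster nodes are scored first, only the best ones are
# expanded, and their leaf documents are scored exactly. In JTR the tree is
# trained so that node scores satisfy a max-heap property, which makes this
# kind of pruning reliable; here plain centroids stand in for trained nodes.
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(8, 4))                    # 8 document embeddings
query = rng.normal(size=4)

# Two-level tree: leaves are documents, internal nodes are cluster centroids.
clusters = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
centroids = {c: docs[ids].mean(axis=0) for c, ids in clusters.items()}

def beam_search(query, beam_size=1, top_k=2):
    # Score clusters, keep the best `beam_size`, then score their leaves.
    cluster_scores = {c: float(query @ v) for c, v in centroids.items()}
    kept = sorted(cluster_scores, key=cluster_scores.get, reverse=True)[:beam_size]
    candidates = [(doc_id, float(query @ docs[doc_id]))
                  for c in kept for doc_id in clusters[c]]
    return sorted(candidates, key=lambda t: t[1], reverse=True)[:top_k]

print(beam_search(query))
```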
Ensuring Access to Safe and Nutritious Food for All Through the Transformation of Food Systems
Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
Spurious correlations that degrade model generalization or lead the model to
be right for the wrong reasons are one of the main robustness concerns for
real-world deployments. However, mitigating these correlations during
pre-training for large-scale models can be costly and impractical, particularly
for those without access to high-performance computing resources. This paper
proposes a novel approach to address spurious correlations during fine-tuning
for a given domain of interest. With a focus on multi-modal models (e.g.,
CLIP), the proposed method leverages different modalities in these models to
detect and explicitly set apart spurious attributes from the affected class,
achieved through a multi-modal contrastive loss function that expresses
spurious relationships through language. Our experimental results and in-depth visualizations on CLIP show that such an intervention can effectively i) improve the model's accuracy when spurious attributes are not present, and ii) direct the model's activation maps towards the actual class rather than the spurious attribute when it is present. In particular, on the Waterbirds dataset, our algorithm achieved a worst-group accuracy 23% higher than ERM on CLIP with a ResNet-50 backbone, and 32% higher on CLIP with a ViT backbone, while maintaining the same average accuracy as ERM.
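The snippet below is a rough sketch of the intuition only, not the paper's exact loss: image embeddings are pulled toward a textual class description and pushed away from a textual description of the spurious attribute. Random vectors stand in for real CLIP features, and all names are illustrative.

```python
# A rough sketch of the idea of separating a class from a spurious attribute
# through language: a contrastive-style objective rewards similarity to the
# class text and penalizes similarity to the spurious-attribute text.
# Random vectors stand in for real CLIP features.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
img = F.normalize(torch.randn(16, 512), dim=-1)          # image embeddings
class_text = F.normalize(torch.randn(512), dim=-1)       # e.g. "a landbird"
spurious_text = F.normalize(torch.randn(512), dim=-1)    # e.g. "a water background"

def debias_loss(img, class_text, spurious_text, temperature=0.07):
    pos = img @ class_text / temperature       # similarity to the class description
    neg = img @ spurious_text / temperature    # similarity to the spurious attribute
    # Cross-entropy over (class, spurious) choices: maximize class similarity
    # while suppressing reliance on the spurious attribute.
    logits = torch.stack([pos, neg], dim=1)
    return F.cross_entropy(logits, torch.zeros(len(img), dtype=torch.long))

print(debias_loss(img, class_text, spurious_text))
```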
An Improved eXplainable Point Cloud Classifier (XPCC)
Classification of objects from 3D point clouds has become an increasingly relevant task across many computer vision applications. However, few studies have investigated explainable methods. In this paper, a new prototype-based and explainable classification method called eXplainable Point Cloud Classifier (XPCC) is proposed. The XPCC method offers several advantages over previous explainable and non-explainable methods. First, the XPCC method uses local densities and global multivariate generative distributions. Therefore, the XPCC provides comprehensive and interpretable object-based classification. Furthermore, the proposed method is built on recursive calculations and is thus computationally very efficient. Second, the model learns continuously without the need for complete re-training and is domain transferable. Third, the proposed XPCC expands on the underlying learning method, xDNN, and is specific to 3D. As such, three new layers are added to the original xDNN architecture: i) the 3D point cloud feature extraction, ii) the global compound prototype weighting, and iii) the SoftMax function. Experiments were performed on the ModelNet40 benchmark, which demonstrated that XPCC is the only explainable point cloud classifier to increase classification accuracy relative to the base algorithm when applied to the same problem. Additionally, this paper proposes a novel prototype-based visual representation that provides model- and object-based explanations. The prototype objects are superimposed to create a prototypical class representation of their data density within the feature space, called the Compound Prototype Cloud. They allow a user to visualize the explainable aspects of the model and identify object regions that contribute to the classification in a human-understandable way.
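As a generic, hedged illustration of prototype-based classification with recursive updates (not the XPCC algorithm itself), the sketch below keeps one running-mean prototype per class and assigns new samples to the nearest prototype; the feature vectors are random stand-ins for point-cloud descriptors.

```python
# A generic sketch of prototype-based classification in the spirit described
# above (not XPCC itself): class prototypes are updated with a recursive
# running mean, and a new sample is assigned to the class whose prototype is
# closest in feature space. Features here are random stand-ins.
import numpy as np

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}   # class label -> running-mean feature vector
        self.counts = {}

    def learn_one(self, x, label):
        # Recursive mean update: no re-training over past samples is needed.
        n = self.counts.get(label, 0) + 1
        proto = self.prototypes.get(label, np.zeros_like(x))
        self.prototypes[label] = proto + (x - proto) / n
        self.counts[label] = n

    def predict(self, x):
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(x - self.prototypes[c]))

rng = np.random.default_rng(0)
clf = PrototypeClassifier()
for label, center in enumerate([0.0, 5.0]):
    for _ in range(20):
        clf.learn_one(rng.normal(center, 1.0, size=8), label)
print(clf.predict(rng.normal(5.0, 1.0, size=8)))   # expected: 1
```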
Meaning-Making Practices of Emergent Arabic–English Bilingual Kindergarten Children in Cairo
The number of British Schools in the Middle East and North Africa (MENA) region is growing. The National Curriculum of England is used by an increasing number of such schools. As well as exporting a culturally-specific curriculum, these schools usually adopt an ideology of monolingualism, thus potentially limiting communication for emergent bilinguals and failing to acknowledge the multiple ways of meaning-making.
Current studies of translanguaging are moving the focus to multimodal forms of communication as a resource for thinking and communicating (García and Wei 2014, Wei 2018). Building on the work of Kress (1997, 2010), I explore pre-school emergent bilinguals’ wider signifying practices and create an analytical framework, which I call MMTL (multimodal translanguaging), used as a lens to illustrate meaning-making.
Valley Hill in Cairo, Egypt, is a British school that encourages ‘English-only’ as the medium of instruction in the kindergarten. Using a case study methodology, this research explores the meaning-making practices of eight emergent bilingual children aged 3–4 during child-initiated play, a sample later reduced to four in the thesis to provide a detailed multimodal analysis. The principal aim is to explore their speech, gaze, gesture, and their engagement (layout/position) with artefacts during play.
The findings of this study suggest that although there is an ‘English-only’ approach, these young emergent bilingual children are meaning-making in a variety of ways. Children are translanguaging, but never in isolation from other modes of communication. Emergent bilinguals use a range of modes to mediate their understanding and communication with others. They use gesture, gaze, and artefacts alongside translingual practices to move meaning across to more accessible modes, enabling communication and understanding. The implication for schools is that they should embrace such hybrid practices, and that teachers should be more responsive to young children’s meaning-making to enable learning.