126 research outputs found
There is no Such Thing as a Sham Trust
The Court of Appeal decision in Official Assignee v Wilson is the leading New Zealand case on "sham trusts". In obiter dicta, O'Regan and Robertson JJ held that for a sham trust to exist, the settlor and trustee must have a common intention not to create a trust. Post-Wilson, debate continues over the precise elements that render a trust a sham. The Law Commission suggested that the sham doctrine, as a means of analysing the validity of an express trust, may not be the best approach; a better starting point would be a return to the certainty of intention requirement. In arguing that the Law Commission's recommendation is correct, this article will discuss three legal issues: whether an express trust is a unilateral or bilateral transaction; whether the excluded evidence has always been part of the objective intention requirement; and whether legislative and policy factors have made foreign trust law distinct from New Zealand trust law. Finally, this article will expand on the test proposed by the Law Commission.
Joining the Aotearoa New Zealand Constitutional Debate: Constitutional Environmental Rights in our Future ‘Constitution’
In 2013, the Constitutional Advisory Panel invited New Zealanders to think about our vision of what New Zealand should look like in the future and to consider how our constitutional arrangements would support that vision. In response, New Zealanders have suggested the inclusion of an environmental protection regime in our future constitutional landscape. The author supports this prevailing opinion. This paper will use the experiences gained from international and regional human rights and environmental law treaties and other countries' constitutions to explore the best model to achieve that goal. This comparative law analysis will identify the key theoretical and legal issues that must be addressed by Parliament to ensure the successful implementation and enforcement of an environmental protection regime through the courts. While international developments are important, any environmental constitutional framework must reflect New Zealand's unique and distinctive history, environment, people, and cultural values. With this in mind, this paper will tentatively canvass a new environmental constitutional framework and lay foundations for further legal research and public debate.
Interaction-driven topological phase diagram of twisted bilayer MoTe$_2$
Twisted bilayer MoTe$_2$ is a promising platform to investigate the interplay
between topology and many-body interaction. We present a theoretical study of
its interaction-driven quantum phase diagrams based on a three-orbital model,
which can be viewed as a generalization of the Kane-Mele-Hubbard model with an
additional orbital and realistic Coulomb repulsion. We predict a cascade of
phase transitions tuned by the twist angle $\theta$. At hole filling factor
$\nu = 1$ (one hole per moir\'e unit cell), the ground state can be in the
multiferroic phase with coexisting spontaneous layer polarization and
magnetism, the quantum anomalous Hall phase, and finally the topologically
trivial magnetic phases, as $\theta$ increases. At $\nu = 2$, the ground state
can have a second-order phase transition between an antiferromagnetic phase
and the quantum spin Hall phase as $\theta$ passes through a critical value.
The dependence of the phase
boundaries on model parameters such as the gate-to-sample distance, the
dielectric constant, and the moir\'e potential amplitude is examined. The
predicted phase diagrams can guide the search for topological phases in twisted
transition metal dichalcogenide homobilayers. Comment: 12 pages, 7 figures. Comments and collaborations are welcome
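For context, the Kane-Mele-Hubbard model that the abstract's three-orbital model generalizes can be written in its standard textbook form (this is the generic honeycomb-lattice Hamiltonian, not the paper's specific model; $t$, $\lambda_{\mathrm{SO}}$ and $U$ are the usual hopping, spin-orbit, and on-site repulsion parameters):

```latex
H = -t \sum_{\langle ij\rangle,\sigma} c^{\dagger}_{i\sigma} c_{j\sigma}
  + i\lambda_{\mathrm{SO}} \sum_{\langle\langle ij\rangle\rangle,\sigma\sigma'}
      \nu_{ij}\, c^{\dagger}_{i\sigma} s^{z}_{\sigma\sigma'} c_{j\sigma'}
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here $\nu_{ij} = \pm 1$ encodes the chirality of next-nearest-neighbour hopping; the abstract's model adds a further orbital and replaces the on-site $U$ with realistic long-range Coulomb repulsion.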
Mapping solar array location, size, and capacity using deep learning and overhead imagery
The effective integration of distributed solar photovoltaic (PV) arrays into
existing power grids will require access to high-quality data: the location,
power capacity, and energy generation of individual solar PV installations.
Unfortunately, existing methods for obtaining this data are limited in their
spatial resolution and completeness. We propose a general framework for
accurately and cheaply mapping individual PV arrays, and their capacities, over
large geographic areas. At the core of this approach is a deep learning
algorithm called SolarMapper - which we make publicly available - that can
automatically map PV arrays in high resolution overhead imagery. We estimate
the performance of SolarMapper on a large dataset of overhead imagery across
three US cities in California. We also describe a procedure for deploying
SolarMapper to new geographic regions, so that it can be utilized by others. We
demonstrate the effectiveness of the proposed deployment procedure by using it
to map solar arrays across the entire US state of Connecticut (CT). Using these
results, we demonstrate that we achieve highly accurate estimates of total
installed PV capacity within each of CT's 168 municipal regions.
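The capacity-estimation step described above can be sketched as follows. This is a minimal illustration, not SolarMapper's actual pipeline: it assumes a binary segmentation mask output, a known ground sampling distance, and a rule-of-thumb panel power density (the 150 W/m² figure is an assumption for illustration only).

```python
import numpy as np

def estimate_capacity_kw(mask, gsd_m, watts_per_m2=150.0):
    """Estimate installed PV capacity (kW) from a binary segmentation mask.

    mask:         2-D array of 0/1 pixel labels from the segmentation model.
    gsd_m:        ground sampling distance of the imagery, metres per pixel.
    watts_per_m2: assumed panel power density (rough rule of thumb).
    """
    panel_area_m2 = float(mask.sum()) * gsd_m ** 2  # pixels -> square metres
    return panel_area_m2 * watts_per_m2 / 1000.0    # watts -> kilowatts

# Example: a detected 10x10-pixel array at 0.3 m/pixel resolution.
mask = np.ones((10, 10), dtype=np.uint8)
capacity = estimate_capacity_kw(mask, gsd_m=0.3)
```

Summing such per-array estimates over all detections in a region yields the kind of regional totals reported for Connecticut's municipalities.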
Collaboration of Pre-trained Models Makes Better Few-shot Learner
Few-shot classification requires deep neural networks to learn generalized
representations only from limited training images, which is challenging but
significant in low-data regimes. Recently, CLIP-based methods have shown
promising few-shot performance, benefiting from contrastive language-image
pre-training. Building on this observation, we ask whether large-scale
pre-training can alleviate the few-shot data deficiency and also assist
representation learning through pre-learned knowledge. In this paper, we propose CoMo, a
Collaboration of pre-trained Models that incorporates diverse prior knowledge
from various pre-training paradigms for better few-shot learning. Our CoMo
includes: CLIP's language-contrastive knowledge, DINO's vision-contrastive
knowledge, and DALL-E's language-generative knowledge. Specifically, CoMo works
in two aspects: few-shot data expansion and diverse knowledge ensemble. For
one, we generate synthetic images via zero-shot DALL-E to enrich the few-shot
training data without any manual effort. For the other, we introduce a learnable
Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from
CLIP and DINO. By such collaboration, CoMo can fully unleash the potential of
different pre-training methods and unify them to achieve state-of-the-art
few-shot classification. We conduct extensive experiments on 11 datasets to
demonstrate the superiority and generalization ability of our approach. Comment: 10 pages, 6 figures
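The "diverse knowledge ensemble" idea, blending predictions from two pre-trained models, can be sketched in a few lines. This is a simplified stand-in for the MK-Adapter: here the blending weight `alpha` is a fixed scalar, whereas the paper's adapter learns how to blend; the logit values are made up for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_a, logits_b, alpha=0.5):
    """Blend class probabilities from two pre-trained models and pick a class.

    alpha plays the role of the blending weight; in an adapter it would be
    learned rather than fixed.
    """
    probs = alpha * softmax(logits_a) + (1.0 - alpha) * softmax(logits_b)
    return probs.argmax(axis=-1)

# The two models disagree; the blend follows the more confident one (class 0).
clip_logits = np.array([[2.0, 0.1, 0.1]])  # hypothetical CLIP scores
dino_logits = np.array([[0.2, 0.4, 0.1]])  # hypothetical DINO-head scores
pred = ensemble_predict(clip_logits, dino_logits)
```

Averaging in probability space (after softmax) rather than raw logit space keeps the two models' differing score scales from dominating the blend.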
SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension
Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K
multiple-choice questions with accurate human annotations (6x larger than
existing benchmarks), spanning 12 evaluation dimensions including the
comprehension of both the image and video modalities. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with ground-truth options derived from
human annotation enable an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. Comment: Technical Report; Project released at: https://github.com/AILab-CVC/SEED-Bench
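The objective scoring scheme the abstract describes, exact match of the model's chosen option against a human-annotated answer key, reduces to a simple accuracy computation. A minimal sketch (the option letters below are hypothetical, not SEED-Bench data):

```python
def score_mcq(model_choices, ground_truth):
    """Multiple-choice accuracy: no human or GPT judge needed, just exact
    match of each chosen option against the annotated ground-truth key."""
    correct = sum(c == g for c, g in zip(model_choices, ground_truth))
    return correct / len(ground_truth)

# Hypothetical per-question picks vs. ground-truth answer keys.
preds = ["A", "C", "B", "D", "A"]
keys  = ["A", "B", "B", "D", "C"]
accuracy = score_mcq(preds, keys)  # 3 of 5 match
```

In practice the model's "choice" is usually taken as the answer option to which it assigns the highest likelihood, which is what makes the protocol fully automatic.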
Characterisation and functional analysis of the WIF1 gene and its role in hair follicle growth and development of the Angora rabbit
[EN] Growth and development of hair follicles (HF) is a complex and dynamic process in most mammals. As HF growth and development regulate rabbit wool yield, exploring the role of genes involved in HF growth and development may be relevant. In this study, the coding sequence of the Angora rabbit (Oryctolagus cuniculus) WIF1 gene was cloned. The length of the coding region sequence was found to be 1140 bp, which encodes 379 amino acids. Bioinformatics analysis indicated that the WIF1 protein was unstable, hydrophilic and located in the extracellular region, contained a putative signal peptide and exhibited high homology across mammals. Moreover, WIF1 was significantly downregulated in the high wool-production Angora rabbit group. Overexpression and knockdown studies revealed that WIF1 regulates HF growth and development-related genes and proteins, such as LEF1 and CCND1. WIF1 activated β-catenin/TCF transcriptional activity, promoted cell apoptosis and inhibited cellular proliferation. These results indicate that WIF1 might be important for HF development. This study, therefore, provides a theoretical foundation for investigating WIF1 in HF growth and development. This research was funded by the National Natural Science Foundation of China (Grant No. 32102529) and the China Agriculture Research System of MOF and MARA (CARS-43-A-1). Zhao, B.; Li, J.; Zhang, X.; Bao, Z.; Chen, Y.; Wu, X. (2022). Characterisation and functional analysis of the WIF1 gene and its role in hair follicle growth and development of the Angora rabbit. World Rabbit Science. 30(3):209-218. https://doi.org/10.4995/wrs.2022.17353
SEED-Bench-2: Benchmarking Multimodal Large Language Models
Multimodal large language models (MLLMs), building upon the foundation of
powerful large language models (LLMs), have recently demonstrated exceptional
capabilities in generating not only texts but also images given interleaved
multimodal inputs (acting like a combination of GPT-4V and DALL-E 3). However,
existing MLLM benchmarks remain limited to assessing only models' comprehension
ability of single image-text inputs, failing to keep up with the strides made
in MLLMs. A comprehensive benchmark is imperative for investigating the
progress and uncovering the limitations of current MLLMs. In this work, we
categorize the capabilities of MLLMs into hierarchical levels based on the
modalities they can accept and generate, and propose SEED-Bench-2, a
comprehensive benchmark that evaluates the hierarchical capabilities of MLLMs.
Specifically, SEED-Bench-2 comprises 24K multiple-choice questions with
accurate human annotations, which span 27 dimensions, including the
evaluation of both text and image
generation. Multiple-choice questions with ground-truth options derived from
human annotation enable an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 23 prominent open-source
MLLMs and summarize valuable observations. By revealing the limitations of
existing MLLMs through extensive evaluations, we aim for SEED-Bench-2 to
provide insights that will motivate future research towards the goal of General
Artificial Intelligence. Dataset and evaluation code are available at
https://github.com/AILab-CVC/SEED-Bench. Comment: Project released at: https://github.com/AILab-CVC/SEED-Bench. arXiv admin note: text overlap with arXiv:2307.1612