70 research outputs found
Productivity and farm size in Australian agriculture: reinvestigating the returns to scale
Higher productivity among large farms is often assumed to be a result of increasing returns to scale. However, using farm-level data for the Australian broadacre industry, it was found that constant or mildly decreasing returns to scale are more typical. On examining the monotonic change in marginal input returns as farm operating size increases, it was found that large farms achieve higher productivity through changes in production technology rather than through changes in scale. The results highlight the disparity between ‘returns to scale’ and ‘returns to size’ in Australian agriculture. They also suggest that improving productivity in smaller farms would depend more on their ability to access advanced technologies than on their ability to simply expand. The implications for ongoing structural adjustment in Australian agriculture are discussed.
Keywords: returns to scale, returns to size, production function, technological progress, structural adjustment, Australian agriculture, Agricultural and Food Policy
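The distinction the abstract draws can be stated compactly. Returns to scale is the local elasticity of output with respect to a proportional expansion of all inputs along a fixed production function f (standard textbook notation, not taken from the paper):

```latex
\varepsilon(x) \;=\; \sum_{j} \frac{\partial \ln f(x)}{\partial \ln x_j},
\qquad
\varepsilon > 1 \ \text{(increasing)}, \quad
\varepsilon = 1 \ \text{(constant)}, \quad
\varepsilon < 1 \ \text{(decreasing)}.
```

Returns to size, by contrast, tracks how output changes as a farm grows along its expansion path, where the input mix and the technology itself may change, which is why the two concepts can diverge.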
Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters
Adversarial attacks have been proven to be potential threats to Deep Neural Networks (DNNs), and many methods have been proposed to defend against adversarial attacks. However, while enhancing robustness, clean accuracy declines to a certain extent, implying that a trade-off exists between accuracy and robustness. In this paper, we first empirically find an obvious distinction between standard and robust models in the filters' weight distribution of the same architecture, and then theoretically explain this phenomenon in terms of gradient regularization, showing that this difference is an intrinsic property of DNNs and that a static network architecture therefore struggles to improve accuracy and robustness at the same time. Second, based on this observation, we propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net), which handles clean and adversarial examples with a "divide and rule" weight strategy. AW-Net dynamically adjusts the network's weights based on regulation signals generated by an adversarial detector, which is directly influenced by the input sample. Benefiting from the dynamic network architecture, clean and adversarial examples can be processed with different network weights, which provides the potential to enhance accuracy and robustness simultaneously. A series of experiments demonstrate that AW-Net is architecture-friendly in handling both clean and adversarial examples and achieves a better trade-off than state-of-the-art robust models.
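The mechanism described above can be pictured with a minimal sketch: a detector emits a per-sample signal that blends two weight sets for the same layer. Everything here (the norm-threshold detector, the linear blending rule, the shapes) is invented for illustration; the actual AW-Net architecture and its regulation signals are defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two weight sets for one linear layer: one tuned for clean inputs,
# one for adversarial inputs (random stand-ins here).
w_clean = rng.normal(size=(4, 3))
w_robust = rng.normal(size=(4, 3))

def detector(x):
    # Hypothetical adversarial detector: returns a score in [0, 1],
    # where 1 means "looks adversarial". A real detector would be a
    # trained network; thresholding the input norm is illustrative only.
    return float(np.clip(np.linalg.norm(x) - 1.0, 0.0, 1.0))

def aw_net_layer(x):
    # Blend the two weight sets per sample, steered by the detector.
    s = detector(x)
    w = (1.0 - s) * w_clean + s * w_robust
    return w @ x

clean_x = np.zeros(3)          # detector score 0 -> pure clean weights
assert np.allclose(aw_net_layer(clean_x), 0.0)
```

The point of the sketch is only that the effective weights vary with the input sample, so clean and adversarial inputs are routed through different parameters.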
Boosting Adversarial Transferability with Learnable Patch-wise Masks
Adversarial examples have raised widespread attention in security-critical applications because of their transferability across different models. Although many methods have been proposed to boost adversarial transferability, a gap still exists between their performance and practical demands. In this paper, we argue that model-specific discriminative regions are a key factor causing over-fitting to the source model, which in turn reduces transferability to the target model. To address this, a patch-wise mask is used to prune the model-specific regions when calculating adversarial perturbations. To localize these regions accurately, we present a learnable approach that optimizes the mask automatically. Specifically, we simulate the target models in our framework and adjust the patch-wise mask according to the feedback of the simulated models. To improve efficiency, a Differential Evolution (DE) algorithm is used to search for the patch-wise mask of a specific image. During iterative attacks, the learned masks are applied to the image to drop out the patches related to model-specific regions, making the gradients more generic and improving adversarial transferability. The proposed approach is a pre-processing method and can be integrated with existing gradient-based methods to further boost the transfer attack success rate. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method. Incorporating the proposed approach into existing ensemble attacks, we achieve an average success rate of 93.01% against seven advanced defense methods, effectively enhancing state-of-the-art transfer-based attack performance.
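The masking step during iterative attacks can be sketched as follows. The gradient stand-in, the hand-set mask values, and the step sizes are all hypothetical; the paper learns the mask with DE against simulated target models rather than fixing it by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8
patch = 4                       # granularity: a 2x2 grid of 4x4 patches

def loss_grad(x):
    # Stand-in for the attack gradient w.r.t. the input; a real attack
    # would backpropagate a loss through the source model.
    return np.sign(x - 0.5)

def apply_patch_mask(grad, mask):
    # Zero out gradients in patches flagged as model-specific.
    # `mask` is an (H//patch, W//patch) array of 0/1.
    up = np.kron(mask, np.ones((patch, patch)))   # upsample to pixels
    return grad * up

x = rng.uniform(size=(H, W))
mask = np.array([[1, 0],
                 [1, 1]])       # drop the top-right patch

eps, alpha, steps = 0.1, 0.02, 5
adv = x.copy()
for _ in range(steps):          # iterative sign-gradient attack
    g = apply_patch_mask(loss_grad(adv), mask)
    adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)

# The masked patch never receives an update.
assert np.allclose(adv[:patch, patch:], x[:patch, patch:])
```

Dropping the masked patches removes their contribution to the perturbation entirely, which is what makes the remaining gradient directions less source-model-specific.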
Mitigating the Accuracy-Robustness Trade-off via Multi-Teacher Adversarial Distillation
Adversarial training is a practical approach for improving the robustness of deep neural networks against adversarial attacks. Although it brings reliable robustness, performance on clean examples is negatively affected after adversarial training, meaning that a trade-off exists between accuracy and robustness. Recently, some studies have applied knowledge distillation methods to adversarial training, achieving competitive performance in improving robustness, but the accuracy on clean samples remains limited. In this paper, to mitigate the accuracy-robustness trade-off, we introduce Multi-Teacher Adversarial Robustness Distillation (MTARD), which guides the model's adversarial training process by applying a strong clean teacher and a strong robust teacher to handle clean examples and adversarial examples, respectively. During optimization, to ensure that the different teachers show similar knowledge scales, we design an Entropy-Based Balance algorithm that adjusts each teacher's temperature and keeps the teachers' information entropy consistent. Besides, to ensure that the student learns from the multiple teachers at a relatively consistent speed, we propose a Normalization Loss Balance algorithm that adjusts the learning weights of the different types of knowledge. A series of experiments conducted on public datasets demonstrate that MTARD outperforms state-of-the-art adversarial training and distillation methods against various adversarial attacks.
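A toy version of the two-teacher loss with a magnitude-based weighting is sketched below. The logits and the specific weighting rule are illustrative assumptions, not the paper's exact Entropy-Based Balance or Normalization Loss Balance algorithms.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax; raising t flattens the distribution,
    # which is how a teacher's information entropy can be tuned.
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

# Teacher soft labels: a clean teacher guides clean inputs, a robust
# teacher guides adversarial inputs (all logits are invented values).
clean_teacher = softmax([4.0, 1.0, 0.0])
robust_teacher = softmax([3.0, 1.5, 0.2])
student_clean = softmax([2.0, 1.0, 0.5])
student_adv = softmax([1.5, 1.2, 0.8])

l_clean = kl(clean_teacher, student_clean)   # clean-knowledge loss
l_adv = kl(robust_teacher, student_adv)      # robust-knowledge loss

# One plausible balancing scheme: weight each loss by its relative
# magnitude so neither teacher dominates the student's updates.
total = l_clean + l_adv
w_clean, w_adv = l_clean / total, l_adv / total
loss = w_clean * l_clean + w_adv * l_adv
assert 0.0 < loss < total
```

The design question the paper addresses is precisely how to set these weights and temperatures so the student tracks both teachers at a comparable pace.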
Economic reform and the efficiency of Chinese state enterprises : with special reference to energy utilisation
This thesis examines the impact of economic reform on the technical and allocative
efficiency of Chinese state enterprises, using survey data for 1980 and for 1984-88.
Market-oriented economic reform of state enterprises began in the late 1970s and
continued throughout the 1980s. The thesis focuses on two important aspects of reform:
the process by which the central planning system was replaced with an open market
system; and changes in institutions governing the financial relationship of enterprises and
government, the financing of capital and the management of labour. Both aspects were
necessary if state enterprises were to operate efficiently in a competitive market
environment. The primary purpose of the thesis is to explore the impact of the reforms
on the efficiency with which factor inputs, including energy, are used in a sample of state
enterprises in China through the reform period.
The reform was seriously flawed, particularly in the area of the financial relationship
between enterprises and the government and banks. The efficiency of state enterprises is
analysed within the framework of Kornai's soft budget constraint hypothesis, where the
key to improving technical and allocative efficiency is to harden sufficiently the budget
constraint of state enterprises. According to Kornai, at least three major factors
contribute to the soft budget constraint of state enterprises in a centrally planned
economy: soft prices, soft tax and soft credit. The thesis investigates in detail
institutional change in these areas, assesses its impact on the behaviour of sample
enterprises and estimates quantitatively its impact on technical and allocative efficiency.
Technical efficiency is defined as the gap between maximum potential output and
actual output assessed at a given level of inputs; allocative efficiency is defined as the
deviation of input mix from the optimal expansion path assessed at the prevailing relative
price of inputs. In estimating technical efficiency, the stochastic production frontier, a
relatively new econometric approach, is adopted. As existing models are found to be
inappropriate to address the issues in the study, a modified model is used to estimate the
level of technical efficiency achieved by each enterprise. This model is also used to
estimate allocative efficiency.
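A standard stochastic production frontier of the kind described (textbook notation, not necessarily the thesis's modified model) writes log output as a deterministic frontier plus a symmetric noise term and a one-sided inefficiency term:

```latex
\ln y_i \;=\; \beta_0 + \sum_{j} \beta_j \ln x_{ij} + v_i - u_i,
\qquad v_i \sim N(0,\sigma_v^2), \quad u_i \ge 0,
```

so that technical efficiency for enterprise i is TE_i = exp(-u_i), a number in (0, 1] measuring actual output as a fraction of maximum potential output at the given inputs.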
The analysis indicates that, even after economic reform, the budget constraint of
state enterprises was unduly soft. The estimation indicates that both technical and
allocative efficiency have improved since 1980. Further estimation suggests that the
commodity market and labour management reform were successful, contributing on
average over 60 per cent of the improvement found in technical efficiency during the
period studied. Due to a continuing soft budget constraint, reform in the areas of
taxation and financing of capital did not have a positive impact on technical efficiency.
The estimates of allocative efficiency indicate that capital expansion in state enterprises
was too rapid to be economically rational — another sign of a soft budget constraint.
In this study, I incorporate energy into the stochastic production function as an
independent input for estimation. The Chinese economy has been characterised by
scarcity and inefficient use of energy resources. The industrial sector has always been
China's largest and least efficient consumer of energy. Efficiency in the use of energy
therefore deserves attention. The efficiency of energy utilisation in state enterprises is
discussed in the framework of Kornai's soft budget constraint.
Based on the empirical results, the thesis discusses the implications of market-oriented
reform and institutional distortions for China's long-term economic growth, the
fragile natural environment and future reform policies.
Preventing Unauthorized AI Over-Analysis by Medical Image Adversarial Watermarking
The advancement of deep learning has facilitated the integration of
Artificial Intelligence (AI) into clinical practices, particularly in
computer-aided diagnosis. Given the pivotal role of medical images in various
diagnostic procedures, it becomes imperative to ensure the responsible and
secure utilization of AI techniques. However, the unauthorized utilization of
AI for image analysis raises significant concerns regarding patient privacy and
potential infringement on the proprietary rights of data custodians.
Consequently, the development of pragmatic and cost-effective strategies that
safeguard patient privacy and uphold medical image copyrights emerges as a
critical necessity. In direct response to this pressing demand, we present a
pioneering solution named Medical Image Adversarial watermarking (MIAD-MARK).
Our approach introduces watermarks that strategically mislead unauthorized AI
diagnostic models, inducing erroneous predictions without compromising the
integrity of the visual content. Importantly, our method integrates an
authorization protocol tailored for legitimate users, enabling the removal of
the MIAD-MARK through encryption-generated keys. Through extensive experiments,
we validate the efficacy of MIAD-MARK across three prominent medical image
datasets. The empirical outcomes demonstrate the substantial impact of our
approach, notably reducing the accuracy of standard AI diagnostic models to a
mere 8.57% under white box conditions and 45.83% in the more challenging black
box scenario. Additionally, our solution effectively mitigates unauthorized
exploitation of medical images even in the presence of sophisticated watermark
removal networks. Notably, these AI diagnosis networks exhibit a meager average
accuracy of 38.59% when applied to images protected by MIAD-MARK, underscoring
the robustness of our safeguarding mechanism.
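The authorization protocol can be pictured with a key-derived perturbation that a legitimate user subtracts exactly. This is a deliberately simplified stand-in: the real MIAD-MARK optimizes an adversarial watermark against diagnostic models, whereas the pseudorandom one below only demonstrates the add/remove-with-key mechanics.

```python
import numpy as np

def keyed_watermark(shape, key, eps=0.03):
    # Hypothetical stand-in for the watermark: a small pseudorandom
    # perturbation deterministically derived from a secret key, so an
    # authorized holder of the key can regenerate and remove it.
    rng = np.random.default_rng(key)
    return eps * rng.uniform(-1.0, 1.0, size=shape)

image = np.random.default_rng(1).uniform(size=(8, 8))
key = 42

# Data custodian releases only the protected image.
protected = np.clip(image + keyed_watermark(image.shape, key), 0.0, 1.0)

# Authorized user regenerates the watermark from the key and removes it.
restored = protected - keyed_watermark(image.shape, key)

# Up to clipping at the [0, 1] boundary, the removal is exact.
assert np.abs(restored - image).max() < 0.03 + 1e-9
```

An unauthorized model sees `protected`, which in the real method is crafted to mislead it; only key holders recover a faithful image.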
EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones
The superior performance of modern deep networks usually comes with a costly
training procedure. This paper presents a new curriculum learning approach for
the efficient training of visual backbones (e.g., vision Transformers). Our
work is inspired by the inherent learning dynamics of deep networks: we
experimentally show that at an earlier training stage, the model mainly learns
to recognize some 'easier-to-learn' discriminative patterns within each
example, e.g., the lower-frequency components of images and the original
information before data augmentation. Driven by this phenomenon, we propose a
curriculum where the model always leverages all the training data at each
epoch, while the curriculum starts with only exposing the 'easier-to-learn'
patterns of each example, and introduces gradually more difficult patterns. To
implement this idea, we 1) introduce a cropping operation in the Fourier
spectrum of the inputs, which enables the model to learn from only the
lower-frequency components efficiently, 2) demonstrate that exposing the
features of original images amounts to adopting weaker data augmentation, and
3) integrate 1) and 2) and design a curriculum learning schedule with a
greedy-search algorithm. The resulting approach, EfficientTrain, is simple,
general, yet surprisingly effective. As an off-the-shelf method, it reduces the
wall-time training cost of a wide variety of popular models (e.g., ResNet,
ConvNeXt, DeiT, PVT, Swin, and CSWin) by >1.5x on ImageNet-1K/22K without
sacrificing accuracy. It is also effective for self-supervised learning (e.g.,
MAE). Code is available at https://github.com/LeapLabTHU/EfficientTrain.
Comment: ICCV 202
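Step 1) of the method, learning from only the lower-frequency components, can be approximated with a simple Fourier low-pass sketch. The paper crops the spectrum to a smaller image for speed; masking at full size, as below, is an illustrative variant with invented window sizes.

```python
import numpy as np

def low_freq_crop(img, keep):
    # Keep only a centered `keep x keep` window of the shifted Fourier
    # spectrum and invert back: the "easier-to-learn" low-frequency
    # view of the image used early in the curriculum.
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(f)
    cy, cx = h // 2, w // 2
    r = keep // 2
    mask[cy - r:cy + r, cx - r:cx + r] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(0)
img = rng.uniform(size=(32, 32))

# A full-size window reproduces the image; smaller windows blur it,
# so widening the window over training forms a curriculum.
full = low_freq_crop(img, 32)
assert np.allclose(full, img)
coarse = low_freq_crop(img, 8)
assert coarse.shape == img.shape
```

In the curriculum, the window starts small and grows across epochs, so every example is seen every epoch but early epochs expose only its coarse structure.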
Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation
Recent breakthroughs in large language models (LLMs) have brought remarkable
success in the field of LLM-as-Agent. Nevertheless, a prevalent assumption is
that the information processed by LLMs is consistently honest, neglecting the
pervasive deceptive or misleading information in human society and AI-generated
content. This oversight makes LLMs susceptible to malicious manipulations,
potentially resulting in detrimental outcomes. This study utilizes the
intricate Avalon game as a testbed to explore LLMs' potential in deceptive
environments. Avalon, full of misinformation and requiring sophisticated logic,
manifests as a "Game-of-Thoughts". Inspired by the efficacy of humans'
recursive thinking and perspective-taking in the Avalon game, we introduce a
novel framework, Recursive Contemplation (ReCon), to enhance LLMs' ability to
identify and counteract deceptive information. ReCon combines formulation and
refinement contemplation processes; formulation contemplation produces initial
thoughts and speech, while refinement contemplation further polishes them.
Additionally, we incorporate first-order and second-order perspective
transitions into these processes respectively. Specifically, the first-order
allows an LLM agent to infer others' mental states, and the second-order
involves understanding how others perceive the agent's mental state. After
integrating ReCon with different LLMs, extensive experimental results from the
Avalon game indicate its efficacy in aiding LLMs to discern and maneuver around
deceptive information without extra fine-tuning or data. Finally, we offer a
possible explanation for the efficacy of ReCon and explore the current
limitations of LLMs in terms of safety, reasoning, speaking style, and format,
potentially furnishing insights for subsequent research.
Comment: 40 page
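The two contemplation stages map naturally onto a pair of prompt-then-refine calls. All prompts and the `llm` stub below are invented placeholders; ReCon's actual prompt designs and perspective-transition wording are given in the paper.

```python
def llm(prompt):
    # Stand-in for a chat-model call; a real agent would query an LLM API.
    return f"[response to: {prompt[:40]}...]"

def recon_turn(observation):
    # Formulation contemplation with a first-order perspective shift:
    # infer what other players might believe or hide, then draft a
    # thought and a public speech.
    first_order = llm(f"Given {observation}, what might others be hiding?")
    draft = llm(f"Using {first_order}, draft your thought and speech.")

    # Refinement contemplation with a second-order perspective shift:
    # anticipate how others will interpret the draft, then polish it.
    second_order = llm(f"How would others perceive this speech: {draft}?")
    final = llm(f"Refine {draft} given {second_order}.")
    return final

out = recon_turn("Player 3 accuses Player 1 without evidence")
assert isinstance(out, str) and out
```

The structure, not the stub, is the point: each turn spends two extra inference passes on perspective-taking before anything is said, with no fine-tuning required.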