Crafting Elastic Masculinity: Formations of Shenti, Intimacy and Kinship among Young Men in China
Under the ever-deepening transformations of contemporary China, traditional gender relations have been reshaped, but elements of patriarchy informed by the legacy of Confucianism still linger. These intricately interwoven forces have exerted a great impact on the gendered lives of the young generation. This research examines young men’s views of Chinese manhood and how they construct and negotiate masculinities in their everyday lives. I conducted 30 semi-structured in-depth interviews with Chinese men aged between 22 and 32, most of whom are ordinary men of the middle social stratum in Shanghai and Shenyang. I regard Chinese men as actively negotiating their identities within particular stages of their life course. Overall, this thesis is informed by perspectives of relational selfhood and by Confucian notions of the relational, reflexive and embodied self as an ongoing process of becoming. I bring indigenous concepts and cultural repertoires into critical dialogue with leading global sociological theories of individualisation and reflexivity. Based on my analyses, I introduce and develop the concept of ‘elastic masculinity’. Specifically, I argue that the masculinity of ordinary young men is flexible, adaptable and accommodating. However, the term also signals that this elasticity is limited by the availability of resources, structural constraints, cultural traditions and diverse personal relationships. Elastic masculinity is thus an apt metaphor and an important concept for understanding Chinese young men’s active engagement with China’s global modernity, increasing individualisation, shifting gender values and local realities.
Online Targetless Radar-Camera Extrinsic Calibration Based on the Common Features of Radar and Camera
Sensor fusion is essential for autonomous driving and autonomous robots, and
radar-camera fusion systems have gained popularity due to their complementary
sensing capabilities. However, accurate calibration between these two sensors
is crucial to ensure effective fusion and improve overall system performance.
Calibration involves intrinsic and extrinsic calibration, with the latter being
particularly important for achieving accurate sensor fusion. Unfortunately,
many target-based calibration methods require complex operating procedures and
well-designed experimental conditions, posing challenges for researchers
attempting to reproduce the results. To address this issue, we introduce a
novel approach that leverages deep learning to extract a common feature from
raw radar data (i.e., Range-Doppler-Angle data) and camera images. Instead of
explicitly representing these common features, our method implicitly utilizes
these common features to match identical objects from both data sources.
Specifically, the extracted common feature serves as an example to demonstrate
an online targetless calibration method between the radar and camera systems.
The estimation of the extrinsic transformation matrix is achieved through this
feature-based approach. To enhance the accuracy and robustness of the
calibration, we apply RANSAC and the Levenberg-Marquardt (LM) nonlinear
optimization algorithm to derive the matrix. Real-world experiments
demonstrate the effectiveness and accuracy of the proposed method.
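The RANSAC-plus-refinement loop mentioned above can be sketched in miniature. The following is a pure-Python illustration on a toy 1-D line-fitting problem rather than the paper's full extrinsic-matrix estimation; the final least-squares refit stands in for the LM refinement step, and all names and numbers are illustrative assumptions:

```python
import random

def fit_line(pts):
    # Least-squares fit of y = a*x + b (stands in for the model solver;
    # the paper estimates a full extrinsic transformation instead).
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def ransac(pts, iters=200, thresh=0.5, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(pts, 2)          # minimal sample for a line
        if sample[0][0] == sample[1][0]:
            continue                          # degenerate sample, skip
        a, b = fit_line(sample)
        inliers = [p for p in pts if abs(p[1] - (a * p[0] + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Final refinement over all inliers (the paper uses LM here).
    return fit_line(best_inliers)

# Synthetic correspondences: y = 2x + 1 plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 40.0), (12, -30.0), (3, 99.0)]
a, b = ransac(pts)
```

The same pattern scales up: sample a minimal set of radar-camera correspondences, hypothesize a transform, score it by inlier count, then refine on the consensus set.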
3D Radar and Camera Co-Calibration: A Flexible and Accurate Method for Target-based Extrinsic Calibration
Advances in autonomous driving are inseparable from sensor fusion.
Heterogeneous sensors are widely used for sensor fusion due to their
complementary properties, with radar and camera being the most equipped
sensors. Intrinsic and extrinsic calibration are essential steps in sensor
fusion. The extrinsic calibration, which is independent of the sensors' own
parameters and is performed after the sensors are installed, largely
determines the accuracy of sensor fusion. Many target-based methods require
cumbersome operating procedures and well-designed experimental conditions,
making them extremely challenging to reproduce. To this end, we propose a
flexible, easy-to-reproduce and accurate
method for extrinsic calibration of 3D radar and camera. The proposed method
does not require a specially designed calibration environment. Instead, a
single corner reflector (CR) is placed on the ground, radar and camera data
are collected simultaneously and iteratively using the Robot Operating System
(ROS), and radar-camera point correspondences are obtained from their
timestamps. These correspondences are then used as input to solve the
perspective-n-point (PnP) problem, yielding the extrinsic calibration matrix.
RANSAC is used for robustness and the Levenberg-Marquardt (LM) nonlinear
optimization algorithm for accuracy. Multiple controlled-environment
experiments as well as real-world experiments demonstrate the efficiency and
accuracy (AED error of 15.31 pixels and accuracy of up to 89%) of the
proposed method.
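The geometry behind PnP-based extrinsic calibration can be made concrete with a short sketch: a 3D point in the radar frame is mapped into the camera frame by the extrinsic transform [R|t] and then projected through the pinhole intrinsics K. The rotation, translation and intrinsics below are made-up illustrative values, not taken from the paper:

```python
# Map a radar-frame 3D point into pixel coordinates via extrinsics + intrinsics.
def project(point_radar, R, t, K):
    # Camera-frame coordinates: X_c = R @ X_r + t
    xc = [sum(R[i][j] * point_radar[j] for j in range(3)) + t[i]
          for i in range(3)]
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy
    fx, fy, cx, cy = K
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return u, v

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity rotation (assumed)
t = [0.1, 0.0, 0.0]                     # 10 cm lateral offset (assumed)
K = (800.0, 800.0, 320.0, 240.0)        # fx, fy, cx, cy (assumed)
u, v = project([1.0, 0.5, 4.0], R, t, K)
```

PnP runs this relation in reverse: given several (3D, 2D) correspondences from the corner reflector, it solves for the R and t that best explain the observed pixels.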
Flexible bond wire capacitive strain sensor for a vehicle tyre
This thesis reports a novel flexible, wire-bond-structured capacitive sensor design that can measure strain in tyres stably and reliably without disturbing the tyre material during measurement. An industrially achievable fabrication method based on the design has also been investigated, and it is believed that the sensor could be introduced into mass production.
Bond wire technology, laser machining and photolithography are adopted to fabricate the strain sensor, with wire bonding being the most significant process in this design. An array of 25 micrometer bond wires, of the kind normally employed for electrical connections in integrated circuits, is built to create an interdigitated structure that generates approximately 10 pF of capacitance. The array, which occupies an area of approximately 8 × 8 mm, consists of 50 wire loops and forms 49 capacitor pairs. The aluminium wires are bonded to a flexible PCB that is specially finished to allow direct bonding to its copper surface. The wire array is finally packaged and embedded in a flexible and compliant material, polydimethylsiloxane (PDMS), which acts as the structural material that is strained. The combination of bond wires, flexible PCB and PDMS embedding minimises the stiffness of the strain sensor, while the PDMS also protects the sensor from damage. Under tensile strain the wires are stretched further apart, reducing the capacitance; conversely, the wires move closer together and the capacitance increases when the sensor is compressed. Unlike a traditional interdigital capacitor, the capacitance of the device varies almost linearly with strain, and the sensor can measure strains of at least ±60000 micro-strain (±6%) with a resolution of 111 micro-strain (0.01%).
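The quoted ~10 pF figure can be sanity-checked to order of magnitude with the classic parallel-wire capacitance formula, C = πεL / acosh(d/2r) per pair, with the pairs acting in parallel. The wire radius and pair count come from the abstract; the pitch, effective length and PDMS permittivity below are assumptions, and the real looped geometry will differ:

```python
import math

# Order-of-magnitude check of the ~10 pF figure using the parallel-wire
# capacitance formula C = pi*eps*L / acosh(d/(2r)) per wire pair.
eps0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 2.7        # approximate relative permittivity of PDMS (assumed)
r = 12.5e-6        # wire radius: 25 um bond wire, as stated in the abstract
d = 160e-6         # wire pitch over ~8 mm / 50 loops (assumed)
L = 3e-3           # effective parallel length per loop (assumed)
pairs = 49         # capacitor pairs, as stated in the abstract

c_pair = math.pi * eps0 * eps_r * L / math.acosh(d / (2 * r))
c_total = pairs * c_pair   # pairs act in parallel, so capacitances add
```

Under these assumptions the estimate lands at a few picofarads, the same order as the reported ~10 pF, which is as much agreement as a straight-wire approximation of a bonded loop array can offer.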
Contemporary Chinese Queer Performance
Review of the book Contemporary Chinese Queer Performance, by Hongwei Bao
mmFall: Fall Detection using 4D MmWave Radar and a Hybrid Variational RNN AutoEncoder
In this paper we propose mmFall - a novel fall detection system, which
comprises (i) the emerging millimeter-wave (mmWave) radar sensor to collect
the human body's point cloud along with the body centroid, and (ii) a
variational recurrent autoencoder (VRAE) to compute the anomaly level of the
body motion based on the acquired point cloud. A fall is claimed to have
occurred when the spike in anomaly level and the drop in centroid height occur
simultaneously. The mmWave radar sensor provides several advantages, such as
privacy compliance and high sensitivity to motion, over the traditional sensing
modalities. However, (i) randomness in radar point cloud data and (ii)
difficulties in fall collection/labeling in the traditional supervised fall
detection approaches are the two main challenges. To overcome the randomness in
radar data, the proposed VRAE uses variational inference, a probabilistic
approach rather than the traditional deterministic approach, to infer the
posterior probability of the body's latent motion state at each frame, followed
by a recurrent neural network (RNN) to learn the temporal features of the
motion over multiple frames. Moreover, to circumvent the difficulties in fall
data collection/labeling, the VRAE is built upon an autoencoder architecture in
a semi-supervised approach, and trained on only normal activities of daily
living (ADL) such that in the inference stage the VRAE will generate a spike in
the anomaly level once an abnormal motion, such as fall, occurs. During the
experiment, we implemented the VRAE along with two other baselines, and tested
on the dataset collected in an apartment. The receiver operating characteristic
(ROC) curve indicates that our proposed model outperforms the other two
baselines, and achieves 98% detection out of 50 falls at the expense of just 2
false alarms.
Comment: Preprint version
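The fall-decision rule described above — declare a fall only when a spike in the anomaly level and a drop in centroid height coincide — reduces to a simple co-occurrence test. The thresholds and toy sequences below are illustrative assumptions, not the paper's values:

```python
# Declare a fall only when an anomaly spike and a centroid-height drop
# occur in the same frame, as in the decision rule described above.
def detect_falls(anomaly, height, spike_thresh=3.0, drop_thresh=0.4):
    falls = []
    for i in range(1, len(anomaly)):
        spike = anomaly[i] > spike_thresh
        drop = (height[i - 1] - height[i]) > drop_thresh  # metres per frame
        if spike and drop:
            falls.append(i)
    return falls

# Toy sequences: the anomaly level spikes at frame 3 while the centroid drops.
anomaly = [0.5, 0.6, 0.4, 5.2, 4.8, 0.7]
height  = [0.9, 0.9, 0.9, 0.3, 0.2, 0.2]
frames = detect_falls(anomaly, height)
```

Requiring both conditions is what suppresses false alarms from fast but upright motions (spike without drop) or from sitting down slowly (drop without spike).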
mm-Pose: Real-Time Human Skeletal Posture Estimation using mmWave Radars and CNNs
In this paper, mm-Pose, a novel approach to detect and track human skeletons
in real-time using an mmWave radar, is proposed. To the best of the authors'
knowledge, this is the first method to detect >15 distinct skeletal joints
using mmWave radar reflection signals. The proposed method would find several
applications in traffic monitoring systems, autonomous vehicles, patient
monitoring systems and defense forces to detect and track human skeleton for
effective and preventive decision making in real-time. The use of radar makes
the system operationally robust to scene lighting and adverse weather
conditions. The reflected radar point cloud in range, azimuth and elevation is
first resolved and projected in the Range-Azimuth and Range-Elevation planes. A
novel compact, high-resolution radar-to-image representation is also presented
that overcomes the sparsity of traditional point-cloud data and significantly
reduces the size of the subsequent machine learning architecture. The RGB
channels were assigned the normalized values of range, elevation/azimuth
and the reflection power level for each point. A forked
CNN architecture was used to predict the real-world position of the skeletal
joints in 3-D space, using the radar-to-image representation. The proposed
method was tested for a single human scenario for four primary motions, (i)
Walking, (ii) Swinging left arm, (iii) Swinging right arm, and (iv) Swinging
both arms to validate accurate predictions for motion in range, azimuth and
elevation. The detailed methodology, implementation, challenges, and validation
results are presented.
Comment: Submitted to IEEE Sensors Journal
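The RGB channel assignment described above — normalized range, elevation (or azimuth) and reflected power per point — can be sketched as a small encoding function. The normalization bounds and sample points are illustrative assumptions, not values from the paper:

```python
# Encode each reflected radar point as an (R, G, B) triple in [0, 1],
# as in the radar-to-image representation described above.
def to_rgb_pixels(points, r_max=10.0, e_max=3.0, p_max=100.0):
    pixels = []
    for rng, elev, power in points:
        pixels.append((
            min(rng / r_max, 1.0),    # R channel: normalized range
            min(elev / e_max, 1.0),   # G channel: normalized elevation
            min(power / p_max, 1.0),  # B channel: normalized power
        ))
    return pixels

pts = [(2.5, 1.5, 50.0), (10.0, 3.0, 100.0)]
pixels = to_rgb_pixels(pts)
```

Packing three physical quantities into the color channels is what lets an ordinary image CNN consume the sparse point cloud without a dedicated point-cloud backbone.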
A novel direct power control for open-winding brushless doubly-fed reluctance generators fed by dual two-level converters using a common DC bus
A new direct power control (DPC) strategy is proposed for open-winding brushless doubly-fed reluctance generators (BDFRGs) operating at variable speed and constant frequency. The control winding is open-circuited and fed by two conventional two-level three-phase converters sharing a common DC bus, and the DPC strategy is designed for maximum power point tracking and common-mode voltage elimination. Compared to traditional three-level converter systems, the DC bus voltage, the voltage rating of the power devices and the capacity of each two-level converter are all reduced by 50%, while the reliability, redundancy and fault tolerance of the proposed system are greatly improved. Its effectiveness is evaluated by simulation tests on a 42 kW prototype generator in MATLAB/SIMULINK.
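The common-mode voltage (CMV) elimination mentioned above rests on a simple property of open-winding drives on a shared DC bus: the winding sees the difference of the two converters' outputs, so any pair of switching states with equal phase-leg sums contributes zero net CMV. A quick enumeration illustrates this; the DC bus voltage is an arbitrary illustrative value, and this is a generic open-winding argument rather than the paper's specific vector selection:

```python
from itertools import product

Vdc = 600.0   # illustrative DC bus voltage (assumed)

def cmv(state):
    # CMV of one two-level converter with switching state (sa, sb, sc):
    # Vdc * (sa + sb + sc) / 3
    return Vdc * sum(state) / 3.0

# Enumerate switching-state pairs (one per converter) whose net CMV cancels
# across the open winding, i.e. both converters produce the same CMV.
zero_cmv_pairs = [
    (s1, s2)
    for s1 in product((0, 1), repeat=3)
    for s2 in product((0, 1), repeat=3)
    if cmv(s1) == cmv(s2)
]
```

Of the 64 possible state pairs, 20 produce zero net CMV — enough vectors for a DPC scheme to restrict itself to CMV-free combinations while still regulating power.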
Multiple Patients Behavior Detection in Real-time using mmWave Radar and Deep CNNs
To address potential gaps noted in patient monitoring in the hospital, a
novel patient behavior detection system using mmWave radar and a deep convolutional
neural network (CNN), which supports the simultaneous recognition of multiple
patients' behaviors in real-time, is proposed. In this study, we use an mmWave
radar to track multiple patients and detect the scattering point cloud of each
one. For each patient, the Doppler pattern of the point cloud over a time
period is collected as the behavior signature. A three-layer CNN model is
created to classify the behavior of each patient. The tracking and point-cloud
detection algorithm was also implemented on an mmWave radar hardware
platform with an embedded graphics processing unit (GPU) board to collect
Doppler patterns and run the CNN model. A training dataset of six behavior
types was collected over a long duration, and the model was trained using the
Adam optimizer to minimize a cross-entropy loss. Lastly,
the system was tested for real-time operation and obtained a very good
inference accuracy when predicting each patient's behavior in a two-patient
scenario.
Comment: This paper has been submitted to IEEE Radar Conference 201
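The cross-entropy objective mentioned above has a compact form: for a softmax classifier over the six behavior classes, the per-sample loss is the negative log-probability assigned to the true class. The logits below are made-up illustrative values:

```python
import math

def softmax(logits):
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, true_class):
    # Negative log-probability of the true class under the softmax.
    return -math.log(softmax(logits)[true_class])

logits = [2.0, 0.5, 0.1, -1.0, 0.0, 0.3]   # one logit per behavior class
loss = cross_entropy(logits, true_class=0)
```

Minimizing this loss over the dataset (here with the Adam optimizer) pushes the logit of each sample's true behavior class above the others.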