
    Path^3

    Architectural space is political. The project is composed of multiple paintings connected by a wooden scaffold built to resemble the urban landscape. The paintings depict quotidian architectural spaces, presenting a typology of functional public spaces ranging from the zoo and the courthouse to the library, the church, and the bus stop. Their physical characteristics reflect the culture, values, and governmental tactics of modern states, functioning as control techniques that regulate our social actions within the realm of normality. For example, in the painting of the classroom, the even distribution of identically sized desks reveals academic institutions' role in training students' minds toward relative uniformity; meanwhile, in the painting of the church, the symmetrical design with an extremely high ceiling displays the religious space's intention of constructing sacredness through impressive visual effects. By presenting those spaces in an abstract and minimalist form, I hope to inspire audiences to interpret the spatial power dynamics in the paintings as they walk through the installation. While each painting represents a specific space, my project concerns the urban, social, and political systems that uphold all of those spaces. The wooden scaffold that connects the paintings, on one hand, resembles the ubiquitous grid of urban planning and, on the other, projects a field condition, defined by the urban theorist Stan Allen as any formal or spatial matrix capable of unifying diverse elements while respecting the identity of each. This network of architectural space challenges audiences to view individual spaces in the context of a system, realizing the connections and relations among diverse elements and discursive agencies. As I experienced the COVID-19 outbreak while working on the project, I came to see its theme as a response to the pandemic.
    The damage of COVID-19 falls not only on individuals or on any single social component; it harms the whole system that involves everyone and every social component, including our stock markets, medical systems, and academic institutions. The cause of this damage is the failure of international cooperation to plan a systematic response across different fields. The European Union's failure to help Italy in the early stages of contamination contributed to the unchecked spread of the virus across the continent. The failure of many national governments to act on scientists' warnings caused the later disaster. The lack of international cooperation on a general defense strategy, even after the WHO and China warned other countries of the danger of COVID-19, allowed the pandemic to become global. My project shares with the audience the value of viewing individual objects or events as parts of a system. For example, the actions that a single government or field takes during the COVID-19 outbreak can cause butterfly effects that reach many other states and fields, since they all connect to one another through a multidimensional system. Similarly, classrooms, libraries, and churches are not isolated places; they exist within particular political structures and influence one another. As the audience views the paintings of various architectural typologies, they also see the structure and the joints that connect them all. Thus, we can mirror this way of perception onto our political, cultural, and social realms to reveal the skeleton hidden under our knowledge.

    In Art We Trust


    Cross-modal and Cross-domain Knowledge Transfer for Label-free 3D Segmentation

    Current state-of-the-art point cloud-based perception methods usually rely on large-scale labeled data, which requires expensive manual annotation. A natural option is to explore unsupervised methodologies for 3D perception tasks. However, such methods often suffer substantial performance drops. Fortunately, large amounts of image-based data are available, which suggests an alternative: transferring the knowledge in 2D images to 3D point clouds. Specifically, we propose a novel approach for the challenging cross-modal and cross-domain adaptation task by fully exploring the relationship between images and point clouds and designing effective feature alignment strategies. Without any 3D labels, our method achieves state-of-the-art performance for 3D point cloud semantic segmentation on SemanticKITTI by using the knowledge of KITTI360 and GTA5, compared to existing unsupervised and weakly-supervised baselines. Comment: 12 pages, 4 figures, accepted
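    The abstract does not specify the alignment strategy, but the basic cross-modal pairing it relies on can be sketched: each 3D point is projected into the image plane with the camera intrinsics, the co-located pixel feature is gathered, and an alignment loss pulls point-cloud features toward their paired image features. All function names, shapes, and the cosine-distance loss below are illustrative assumptions, not the paper's actual architecture.

    ```python
    import numpy as np

    def project_points(points, K):
        """Pinhole projection of (N, 3) camera-frame points to pixel coords."""
        uvw = points @ K.T                # (N, 3) homogeneous image coords
        return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (N, 2)

    def gather_pixel_features(feat_map, uv):
        """Nearest-neighbour lookup of per-pixel features for each point."""
        h, w, _ = feat_map.shape
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
        return feat_map[v, u]             # (N, C) paired image features

    def alignment_loss(point_feats, image_feats):
        """Mean cosine distance between paired 3D and 2D features."""
        p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
        i = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
        return float(np.mean(1.0 - np.sum(p * i, axis=1)))
    ```

    In a real pipeline both feature extractors are learned networks and this loss (or a domain-adapted variant) is minimized by gradient descent; the sketch only shows how 2D supervision can reach unlabeled 3D points through geometry.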

    See More and Know More: Zero-shot Point Cloud Segmentation via Multi-modal Visual Data

    Zero-shot point cloud segmentation aims to make deep models capable of recognizing novel objects in point clouds that are unseen in the training phase. Recent trends favor pipelines that transfer knowledge from seen classes with labels to unseen classes without labels. They typically align visual features with semantic features obtained from word embeddings under the supervision of the seen classes' annotations. However, point clouds contain limited information with which to fully match semantic features. In fact, the rich appearance information of images is a natural complement to the textureless point cloud, which is not well explored in previous literature. Motivated by this, we propose a novel multi-modal zero-shot learning method to better utilize the complementary information of point clouds and images for more accurate visual-semantic alignment. Extensive experiments are performed on two popular benchmarks, i.e., SemanticKITTI and nuScenes, and our method outperforms current SOTA methods by 52% and 49% improvement on average for unseen-class mIoU, respectively. Comment: Accepted by ICCV 202
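    The visual-semantic alignment the abstract describes can be reduced to a minimal sketch: per-point visual features are mapped into the word-embedding space and each point is assigned the class whose embedding it best matches by cosine similarity, so classes never seen in training can still be predicted as long as an embedding exists for their name. The shapes and the plain argmax classifier here are assumptions for illustration, not the paper's method.

    ```python
    import numpy as np

    def zero_shot_labels(visual_feats, class_embeddings):
        """Assign each point the class whose embedding it best aligns with.

        visual_feats:     (N, D) per-point features mapped into embedding space
        class_embeddings: (K, D) one word embedding per class name
        returns:          (N,)   predicted class indices
        """
        v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
        c = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
        sims = v @ c.T                    # (N, K) cosine similarities
        return np.argmax(sims, axis=1)
    ```

    The multi-modal contribution of the paper amounts to producing better `visual_feats` by fusing image appearance with point geometry before this alignment step.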

    A variable corona during the transition from type-C to type-B quasi-periodic oscillations in the black hole X-ray binary MAXI J1820+070

    We analyze a Neutron Star Interior Composition Explorer (NICER) observation of the black hole X-ray binary MAXI J1820+070 during a transition from type-C to type-B quasi-periodic oscillations (QPOs). We find that below ~2 keV, for the type-B QPOs the rms amplitude is lower and the magnitude of the phase lags is larger than for the type-C QPOs. Above that energy, the rms and phase-lag spectra of the type-B and type-C QPOs are consistent with being the same. We perform a joint fit of the time-averaged spectra of the source, and the rms and phase-lag spectra of the QPOs, with the time-dependent Comptonization model vkompth to study the geometry of the corona during the transition. We find that the data can be well fitted with a model consisting of a small and a large corona that are physically connected. The sizes of the small and large coronae increase gradually during the type-C QPO phase, whereas they decrease abruptly at the transition to the type-B QPO. At the same time, the inner radius of the disc moves inward at the QPO transition. Combined with simultaneous radio observations showing that discrete jet ejections happen around the time of the QPO transition, we propose that a corona that expands horizontally during the type-C QPO phase, from ~10^{4} km (~800 Rg) to ~10^{5} km (~8000 Rg) overlying the accretion disc, transforms into a vertical jet-like corona extending over ~10^{4} km (~800 Rg) during the type-B QPO phase. Comment: 22 pages, 16 figures, 2 tables, accepted for publication in MNRAS
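    The two diagnostics the abstract compares, the fractional rms amplitude and the phase lag of the QPO, can be sketched in simplified form: the rms follows from integrating the (rms-normalized) power spectrum around the QPO frequency, and the lag is the argument of the cross spectrum between two energy-band light curves at that frequency. This is a toy illustration of the standard definitions, not the paper's analysis pipeline (which fits full rms and lag spectra with vkompth); the sign convention below (negative when `hard` is delayed relative to `soft`) is an assumption of this sketch.

    ```python
    import numpy as np

    def fractional_rms(freqs, power, f_lo, f_hi):
        """rms amplitude: sqrt of rms-normalized power integrated over the QPO."""
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        f, p = freqs[mask], power[mask]
        integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(f))  # trapezoid rule
        return float(np.sqrt(integral))

    def phase_lag(soft, hard, dt, f_qpo):
        """Phase of the cross spectrum between two light curves at f_qpo.

        soft, hard: evenly sampled count-rate series from two energy bands
        dt:         sampling step in seconds
        """
        freqs = np.fft.rfftfreq(len(soft), d=dt)
        cross = np.fft.rfft(hard) * np.conj(np.fft.rfft(soft))
        i = np.argmin(np.abs(freqs - f_qpo))   # bin nearest the QPO frequency
        return float(np.angle(cross[i]))
    ```

    Real analyses average the cross spectrum over many light-curve segments and subtract the Poisson noise level before quoting rms and lags; the sketch omits both steps for clarity.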

    LiCamGait: Gait Recognition in the Wild by Using LiDAR and Camera Multi-modal Visual Sensors

    LiDAR can capture accurate depth information in large-scale scenarios without being affected by lighting conditions, and the captured point cloud contains gait-related 3D geometric properties and dynamic motion characteristics. We make the first attempt to leverage LiDAR to remedy the limitations of view-dependent and light-sensitive cameras for more robust and accurate gait recognition. In this paper, we propose a LiDAR-camera-based gait recognition method with an effective multi-modal feature fusion strategy, which fully exploits the advantages of both point clouds and images. In particular, we propose a new in-the-wild gait dataset, LiCamGait, involving multi-modal visual data and diverse 2D/3D representations. Our method achieves state-of-the-art performance on the new dataset. Code and dataset will be released when this paper is published.
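    The abstract leaves the fusion strategy unspecified; as a minimal stand-in, per-sample LiDAR and camera descriptors can be L2-normalized (so neither modality dominates by scale) and concatenated into one gait descriptor. The real method's fusion is learned; this only illustrates the interface, and all names and shapes are assumptions.

    ```python
    import numpy as np

    def fuse(lidar_feats, camera_feats):
        """Concatenate scale-balanced per-sample features from both sensors.

        lidar_feats:  (N, D1) descriptors from the point-cloud branch
        camera_feats: (N, D2) descriptors from the image branch
        returns:      (N, D1 + D2) fused gait descriptors
        """
        def l2norm(x):
            return x / np.linalg.norm(x, axis=1, keepdims=True)
        return np.concatenate([l2norm(lidar_feats), l2norm(camera_feats)], axis=1)
    ```

    Downstream, such fused descriptors would feed a recognition head (e.g. nearest-neighbour matching in an identity gallery), which is where the complementary geometry-plus-appearance signal pays off.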