255 research outputs found
Removal of copper ions by modified Unye clay, Turkey
WOS: 000260942400006 | PubMed ID: 18375056
This paper presents the adsorption of Cu(II) from aqueous solution on modified Unye bentonite. Adsorption of Cu(II) by the manganese oxide modified bentonite (MMB) sample was investigated as a function of the initial Cu(II) concentration, solution pH, ionic strength, temperature, and inorganic ligands (Cl-, SO42-, HPO42-). Changes in the surface and structure were characterized using X-ray diffraction (XRD), infrared (IR) spectroscopy, N2 gas adsorption, and potentiometric titration data. The adsorption properties of raw bentonite (RB) were further improved by modification with manganese oxide. The Langmuir monolayer adsorption capacity of MMB (105.38 mg/g) was found to be greater than that of the raw bentonite (42.41 mg/g). The spontaneity of the adsorption process is established by the decrease in Delta G, which varied from -4.68 to -5.10 kJ mol(-1) over the temperature range 303-313 K. The high performance exhibited by MMB was attributed to the increased surface area and higher negative surface charge after modification. (c) 2008 Elsevier B.V. All rights reserved.
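The Langmuir capacities quoted in the abstract can be illustrated with a short sketch. This is only an illustration of the standard Langmuir isotherm using the reported q_m values (105.38 and 42.41 mg/g); the Langmuir constant K_L below is a hypothetical placeholder, not a value fitted in the paper.

```python
# Hedged sketch: standard Langmuir monolayer isotherm with the monolayer
# capacities reported in the abstract. K_L is an assumed placeholder value.

def langmuir_uptake(ce_mg_per_l, qm_mg_per_g, kl_l_per_mg):
    """Equilibrium uptake q_e = q_m * K_L * C_e / (1 + K_L * C_e)."""
    return qm_mg_per_g * kl_l_per_mg * ce_mg_per_l / (1.0 + kl_l_per_mg * ce_mg_per_l)

KL = 0.05   # L/mg, hypothetical Langmuir constant (not from the paper)
ce = 100.0  # mg/L, one illustrative equilibrium concentration

q_mmb = langmuir_uptake(ce, 105.38, KL)  # modified bentonite
q_rb = langmuir_uptake(ce, 42.41, KL)    # raw bentonite
```

Under any shared K_L, the higher monolayer capacity of MMB translates directly into higher equilibrium uptake, which is the comparison the abstract reports.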
SalsaNet: Fast Road and Vehicle Segmentation in LiDAR Point Clouds for Autonomous Driving
In this paper, we introduce a deep encoder-decoder network, named SalsaNet,
for efficient semantic segmentation of 3D LiDAR point clouds. SalsaNet segments
the road, i.e. drivable free-space, and vehicles in the scene by employing the
Bird-Eye-View (BEV) image projection of the point cloud. To overcome the lack
of annotated point cloud data, in particular for the road segments, we
introduce an auto-labeling process which transfers automatically generated
labels from the camera to LiDAR. We also explore the role of image-like
projection of LiDAR data in semantic segmentation by comparing BEV with
spherical-front-view projection and show that SalsaNet is projection-agnostic.
We perform quantitative and qualitative evaluations on the KITTI dataset, which
demonstrate that the proposed SalsaNet outperforms other state-of-the-art
semantic segmentation networks in terms of accuracy and computation time. Our
code and data are publicly available at
https://gitlab.com/aksoyeren/salsanet.git
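The BEV image projection mentioned above can be sketched as a simple rasterization of the point cloud into a top-down grid. This is a generic illustration, not SalsaNet's implementation; the grid extents, resolution, and the two channels (max height, point density) are assumptions.

```python
import numpy as np

# Hedged sketch of a bird's-eye-view (BEV) projection: rasterize LiDAR points
# into a 2D grid with a max-height and a point-density channel. Parameters
# are illustrative, not taken from the SalsaNet paper.

def points_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0),
                  resolution=0.5):
    """points: (N, 3) array of x, y, z. Returns an (H, W, 2) BEV tensor."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]
    cols = ((x - x_range[0]) / resolution).astype(int)
    rows = ((y - y_range[0]) / resolution).astype(int)
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    bev = np.zeros((h, w, 2), dtype=np.float32)
    np.maximum.at(bev[:, :, 0], (rows, cols), z)  # channel 0: max height
    np.add.at(bev[:, :, 1], (rows, cols), 1.0)    # channel 1: point count
    return bev

sample = np.array([[10.0, 0.0, 1.5],
                   [10.1, 0.1, 2.0],
                   [60.0, 0.0, 0.5]])  # last point falls outside the grid
bev = points_to_bev(sample)
```

A grid like this turns an unordered point set into a fixed-size image that a standard encoder-decoder CNN can consume, which is the core idea of BEV-based segmentation.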
Learning the Semantics of Manipulation Action
In this paper we present a formal computational framework for modeling
manipulation actions. The introduced formalism leads to semantics of
manipulation action and has applications to both observing and understanding
human manipulation actions as well as executing them with a robotic mechanism
(e.g. a humanoid robot). It is based on a Combinatory Categorial Grammar. The
goal of the introduced framework is to: (1) represent manipulation actions with
both syntax and semantic parts, where the semantic part employs
λ-calculus; (2) enable a probabilistic semantic parsing schema to learn
the λ-calculus representation of manipulation action from an annotated
action corpus of videos; (3) use (1) and (2) to develop a system that visually
observes manipulation actions and understands their meaning while it can reason
beyond observations using propositional logic and axiom schemata. The
experiments conducted on a publicly available large manipulation action dataset
validate the theoretical framework and our implementation.
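The λ-calculus semantics described above can be illustrated with ordinary function abstraction. This is only an illustration of the idea, not the paper's CCG formalism: a transitive action verb is a function that consumes its object and then its subject, yielding a grounded predicate.

```python
# Hedged sketch: a transitive manipulation verb as a curried λ-term,
# λy.λx.cut(x, y), in the spirit of CCG-style compositional semantics.
# The predicate encoding as a tuple is an illustrative choice.

cut = lambda obj: lambda subj: ("cut", subj, obj)

# Arguments are applied in CCG order: object first, then subject.
meaning = cut("bread")("knife")
```

Composing such terms during parsing is what lets the grammar build a full predicate for an observed action from its parts, which a reasoner can then use with axiom schemata.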
Depth- and Semantics-aware Multi-modal Domain Translation: Generating 3D Panoramic Color Images from LiDAR Point Clouds
This work presents a new depth- and semantics-aware conditional generative
model, named TITAN-Next, for cross-domain image-to-image translation in a
multi-modal setup between LiDAR and camera sensors. The proposed model
leverages scene semantics as a mid-level representation and is able to
translate raw LiDAR point clouds to RGB-D camera images by solely relying on
semantic scene segments. We claim that this is the first framework of its kind
and it has practical applications in autonomous vehicles such as providing a
fail-safe mechanism and augmenting available data in the target image domain.
The proposed model is evaluated on the large-scale and challenging
Semantic-KITTI dataset, and experimental findings show that it considerably
outperforms the original TITAN-Net and other strong baselines by a 23.7%
margin in terms of IoU.
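The IoU metric used for this comparison is standard in semantic segmentation and can be sketched briefly; the computation below is the generic per-class definition, not anything specific to TITAN-Next.

```python
import numpy as np

# Hedged sketch: per-class intersection-over-union (IoU) between a predicted
# and a ground-truth label map, the usual segmentation evaluation metric.

def class_iou(pred, gt, cls):
    p, g = (pred == cls), (gt == cls)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union > 0 else float("nan")

pred = np.array([[0, 1],
                 [1, 1]])
gt = np.array([[0, 1],
               [0, 1]])
iou_cls1 = class_iou(pred, gt, cls=1)  # intersection 2, union 3
```

Mean IoU over all classes is then a single scalar that makes margins such as the 23.7% figure directly comparable across models.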
Semantics-aware LiDAR-Only Pseudo Point Cloud Generation for 3D Object Detection
Although LiDAR sensors are crucial for autonomous systems because they provide
precise depth information, they struggle to capture fine object details,
especially at a distance, due to sparse and non-uniform data. Recent advances
introduced pseudo-LiDAR, i.e., synthetic dense point clouds, using additional
modalities such as cameras to enhance 3D object detection. We present a novel
LiDAR-only framework that augments raw scans with denser pseudo point clouds by
solely relying on LiDAR sensors and scene semantics, omitting the need for
cameras. Our framework first utilizes a segmentation model to extract scene
semantics from raw point clouds, and then employs a multi-modal domain
translator to generate synthetic image segments and depth cues without real
cameras. This yields a dense pseudo point cloud enriched with semantic
information. We also introduce a new semantically guided projection method,
which enhances detection performance by retaining only relevant pseudo points.
We applied our framework to different advanced 3D object detection methods and
reported up to a 2.9% performance gain. We also obtained results on the
KITTI 3D object detection dataset comparable to other state-of-the-art
LiDAR-only detectors.
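The semantically guided projection described above amounts to keeping only pseudo points whose predicted class matters for detection. The sketch below illustrates that filtering step only; the class ids and the "relevant" label set are hypothetical, and the paper's actual projection method is more involved.

```python
import numpy as np

# Hedged sketch: retain only pseudo points labeled with detection-relevant
# semantic classes. Class ids below are assumed placeholders.

RELEVANT = {1, 2, 3}  # e.g. car, pedestrian, cyclist (hypothetical ids)

def filter_pseudo_points(points, labels):
    """points: (N, 3) pseudo point cloud; labels: (N,) semantic class ids."""
    keep = np.isin(labels, list(RELEVANT))
    return points[keep]

pts = np.arange(15, dtype=float).reshape(5, 3)
lbl = np.array([0, 1, 4, 2, 1])
kept = filter_pseudo_points(pts, lbl)  # rows with labels 1, 2, 1 survive
```

Discarding background pseudo points keeps the densified cloud from flooding the detector with irrelevant geometry, which is the motivation the abstract gives for the guided projection.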
Comparison of the Efficacy and Safety of Insulin Glargine and Insulin Detemir with NPH Insulin in Children and Adolescents with Type 1 Diabetes Mellitus Receiving Intensive Insulin Therapy
Objective: The purpose of this study was to compare the efficacy and safety of insulin glargine and detemir with NPH insulin in children and adolescents with type 1 diabetes mellitus (DM).
FIVA: Facial Image and Video Anonymization and Anonymization Defense
In this paper, we present a new approach for facial anonymization in images
and videos, abbreviated as FIVA. Our proposed method maintains a consistent
face anonymization across frames via our suggested identity tracking and
guarantees a strong difference from the original face.
FIVA allows for 0 true positives for a false acceptance rate of 0.001. Our work
considers the important security issue of reconstruction attacks and
investigates adversarial noise, uniform noise, and parameter noise to disrupt
reconstruction attacks. In this regard, we apply different defense and
protection methods against these privacy threats to demonstrate the scalability
of FIVA. On top of this, we also show that reconstruction attack models can be
used for detection of deep fakes. Last but not least, we provide experimental
results showing how FIVA can even enable face swapping, which is purely trained
on a single target image.
Comment: Accepted to ICCVW 2023 - DFAD 202
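The "0 true positives at a false acceptance rate of 0.001" criterion can be sketched as a standard threshold-based evaluation. The scores below are synthetic placeholders; a real evaluation would use face-matcher similarity scores between anonymized and original faces (genuine pairs) and between unrelated identities (impostor pairs).

```python
import numpy as np

# Hedged sketch: true-positive rate at a fixed false acceptance rate (FAR).
# The acceptance threshold is set so that `far` of impostor pairs would be
# wrongly accepted; we then count how many genuine pairs still match.

def tpr_at_far(genuine_scores, impostor_scores, far=1e-3):
    thr = np.quantile(impostor_scores, 1.0 - far)
    return float((np.asarray(genuine_scores) >= thr).mean())

# Synthetic scores: anonymized-vs-original similarities kept uniformly low.
genuine = np.full(100, 0.10)
impostor = np.linspace(0.0, 1.0, 10001)
tpr = tpr_at_far(genuine, impostor)  # 0 true positives under these scores
```

A TPR of zero at FAR = 0.001 means no anonymized face is re-identified as its source at that operating point, which is the guarantee the abstract claims for FIVA.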
- …