Common deep learning models for 3D environment perception often use pillarization/voxelization methods to convert point cloud data into pillars/voxels, which are then processed by a 2D/3D convolutional neural network (CNN).
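As a rough illustration (not taken from the paper), the sketch below scatters raw points into a 2D grid of pillars on the x-y plane; the grid extents, pillar size, and function name are illustrative assumptions in the style of pillar-based detectors.

```python
import torch

def pillarize(points: torch.Tensor,
              x_range=(0.0, 69.12), y_range=(-39.68, 39.68),
              pillar_size=0.16) -> torch.Tensor:
    """points: (N, 3) tensor of x, y, z coordinates.
    Returns a (N,) tensor of flat pillar indices on the x-y grid;
    points sharing an index fall into the same pillar."""
    nx = int((x_range[1] - x_range[0]) / pillar_size)
    ny = int((y_range[1] - y_range[0]) / pillar_size)
    ix = ((points[:, 0] - x_range[0]) / pillar_size).long().clamp(0, nx - 1)
    iy = ((points[:, 1] - y_range[0]) / pillar_size).long().clamp(0, ny - 1)
    return ix * ny + iy

# Example: 1000 random points in a typical LiDAR range.
points = torch.rand(1000, 3) * torch.tensor([69.12, 79.36, 4.0]) \
         + torch.tensor([0.0, -39.68, -1.0])
pillar_ids = pillarize(points)
```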
The pioneering work PointNet has been widely applied as a local feature descriptor, a fundamental component of deep learning models for 3D perception, to extract the features of a point cloud. This is achieved with a symmetric max-pooling operator, which provides a unique feature for each pillar/voxel. However, by keeping only the maximum response in each feature channel and thereby ignoring most of the points, the max-pooling operator causes an information loss that reduces model performance.
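To make this loss concrete, here is a minimal PointNet-style descriptor (a sketch under our assumptions, not the original implementation): with D output channels, at most D of the N input points can influence the pooled feature.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Shared per-point MLP followed by symmetric max-pooling (sketch)."""
    def __init__(self, in_dim: int = 3, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, pts: torch.Tensor):
        # pts: (N, in_dim), the points of one pillar/voxel
        feats = self.mlp(pts)              # (N, feat_dim) per-point features
        pooled, argmax = feats.max(dim=0)  # (feat_dim,) pillar/voxel feature
        return pooled, argmax

net = TinyPointNet()
pooled, argmax = net(torch.rand(1000, 3))
# At most feat_dim distinct points survive the pooling; the rest are ignored.
print(argmax.unique().numel(), "of 1000 points contribute")
```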
To address this issue, we propose a novel local feature descriptor, mini-PointNetPlus, as a plug-and-play alternative to PointNet.
Our basic idea is to project the data points separately onto each of the individual features considered, each projection yielding a permutation-invariant representation. The proposed descriptor thus transforms an unordered point cloud into a stably ordered one.
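A minimal sketch of this idea as we read it (details such as padding to a fixed point count are omitted): sorting each feature column independently produces an order that is stable under any permutation of the input points, while retaining every value rather than a single maximum.

```python
import torch

def sorted_projection(feats: torch.Tensor) -> torch.Tensor:
    # feats: (N, D) per-point features; sort every column independently.
    return torch.sort(feats, dim=0, descending=True).values

feats = torch.rand(5, 4)
perm = torch.randperm(5)
# Permutation invariance: any reordering of the points gives the same output.
assert torch.equal(sorted_projection(feats), sorted_projection(feats[perm]))
# Max-pooling keeps only the first sorted row and discards the rest --
# consistent with vanilla PointNet being a special case.
assert torch.equal(sorted_projection(feats)[0], feats.max(dim=0).values)
```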
The vanilla PointNet is proven to be a special case of our mini-PointNetPlus. Because the proposed descriptor fully utilizes the point features, our experiments demonstrate a considerable performance improvement for 3D perception.