Electrical Conductivity as an Indicator of Milk Spoilage for Use in Biosensor Technology
Milk is characterised as a perishable food. It is vulnerable to microbial contamination and has a limited shelf life, even when stored in a cold environment. Rapid spoilage is a persistent problem that restricts the shelf life of milk and adds substantially to global food waste. Thus, there is continuing interest in better means of milk quality control and management. Recently, the development of biosensing technology has offered a potential route to better milk quality management strategies. Biosensors have been developed in response to growing demand for reliable, cost-effective and rapid chemical detection tools, and are employed as analytical tools in many disciplines, including clinical medicine, the food industry and environmental monitoring. In particular, the use of electrical conductivity (EC) as a biosensing approach has frequently been studied in the dairy sector. However, its application to milk spoilage has yet to be fully explored.
The scope of this study was to investigate the use of EC as a parameter to aid the prediction of milk spoilage. A portable conductivity meter was used to measure EC in milk; the total bacterial count (TBC), lactic acid (LA) concentration and pH were assessed using the standard plate count method, titratable acidity and a digital pH meter, respectively. Commercial pasteurized skim and whole milk were used in the study. The variations of EC, TBC, LA concentration and pH were measured over extended storage of milk held at either 4 or 8℃ in the trial experiment. The change in EC was compared with the changes in the other measured parameters, and the interrelationships between EC and these parameters were analysed by correlation analysis. In addition, several laboratory-controlled model systems were used to assess the impact of each individual parameter on the change in EC. The results of the trial and model systems were then compared.
The trial experiment showed that EC progressively increased with increases in TBC, LA concentration and pH during spoilage of skim and whole milk stored at 4 and 8℃. The change in EC showed moderate to strong correlations with the measured parameters in spoilt milk. A statistically significant difference in EC was observed before the complete spoilage of milk, when either flavour defects or textural changes occurred. Moreover, the model systems revealed that the increase in EC is linearly proportional to an increased LA concentration and a decreased pH. Comparing the results of the trial experiment and the model systems showed that LA contributed approximately one-quarter of the total change in EC in spoiled milk. Furthermore, a bacterial load of more than 10^7 colony forming units (CFU)/ml significantly decreased the mean EC value of milk. In addition, the 'best before date' (BBD) underestimated the actual shelf life of milk at both 4 and 8℃.
The fixed nature of the BBD restricts its use as a suitable indicator. In comparison, EC is a potential alternative for predicting milk spoilage, since it is a direct measurement of milk spoilage and changes simultaneously with bacterial growth, LA production and acidity in milk held at either the optimal (4℃) or an inappropriate (8℃) temperature. Further investigations are needed to obtain a better understanding of the interrelationship between EC and milk spoilage before biosensing technology can validly be applied.
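The correlation analysis described above can be sketched as follows; the `pearson` helper is a standard implementation and the data values are illustrative, not the study's measurements.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative storage series: EC (mS/cm) rising alongside LA concentration
ec = [5.1, 5.2, 5.4, 5.9, 6.5, 7.2]
la = [0.14, 0.15, 0.18, 0.25, 0.34, 0.45]
r = pearson(ec, la)   # strong positive correlation for these toy values
```

The same routine applies to EC against TBC or pH, giving the pairwise interrelationships reported in the study.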
Leveraging Structures of the Data in Deep Learning
The performance of deep learning frameworks can be significantly improved by considering the particular underlying structure of each dataset. In this thesis, I summarize our three projects on boosting the performance of deep learning models by leveraging structures of the data. In the first work, we theoretically justify that, for convolutional neural networks (CNNs), the neighborhood of a pixel should be redefined as its most correlated spatial locations in order to achieve a lower generalization error. Based on the correlation pattern, we propose a data-driven approach that designs multiple layers of customized filter shapes by repeatedly solving lasso problems. In the second work, we address the problem of scale invariance in deep learning. We propose ScaleNet to predict object scales. By recursively applying ScaleNet and rescaling, pretrained deep networks can identify objects with scales significantly different from those in the training set. In the last work, we perform an extensive study on PointConv-based frameworks to tackle the problems of scale and rotation invariance in point cloud convolution. PointConv is a novel convolution operation that can be applied directly to point clouds and achieves parity with 2D CNNs in terms of formulation and performance. It takes the coordinates of points as inputs to generate the corresponding weights for convolution. We identify two effective strategies: first, for point clouds converted from regular 2D raster images, we replace the multi-layer perceptron (MLP) based weight function with much simpler cubic polynomials, achieving greater robustness and better performance than traditional 2D CNNs on the MNIST dataset. Next, for 3D point clouds, we introduce a novel viewpoint-invariant (VI) descriptor, which exploits geometric properties between a center point and its local neighbors, as an additional input to the weight function.
Integrated with the VI descriptor, we not only significantly improve the robustness of PointConv but also achieve comparable or better performance than state-of-the-art point-based approaches on both SemanticKITTI and ScanNet.
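As a rough illustration of the PointConv idea summarized above (convolution weights generated from point coordinates), here is a minimal NumPy sketch using a cubic-polynomial weight function for 2D points; the basis choice, shapes and coefficients are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cubic_weight(rel_xy, coeffs):
    """Weight function w(dx, dy): a cubic polynomial in relative coordinates."""
    dx, dy = rel_xy[..., 0], rel_xy[..., 1]
    # monomial basis up to degree 3 (10 terms in 2D)
    basis = np.stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2,
                      dx**3, dy**3, dx**2 * dy, dx * dy**2], axis=-1)
    return basis @ coeffs               # one scalar weight per neighbor

def pointconv(points, feats, center, coeffs):
    """Weighted sum of neighbor features, weights generated from coordinates."""
    rel = points - center               # (K, 2) relative coordinates
    w = cubic_weight(rel, coeffs)       # (K,) generated weights
    return (w[:, None] * feats).sum(0)  # (C,) output feature

points = rng.standard_normal((8, 2))    # 8 neighbor locations
feats = rng.standard_normal((8, 4))     # 4-channel features per point
coeffs = rng.standard_normal(10)        # polynomial coefficients (learned in practice)
out = pointconv(points, feats, points[0], coeffs)
```

In the thesis the coefficients (or the MLP they replace) are learned end to end; here they are random solely to show the data flow.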
Classifying COVID-19 vaccine narratives
Vaccine hesitancy is widespread, despite the government's information
campaigns and the efforts of the World Health Organisation (WHO). Categorising
the topics within vaccine-related narratives is crucial to understanding the
concerns expressed in discussions and identifying the specific issues that
contribute to vaccine hesitancy. This paper addresses the need for monitoring
and analysing vaccine narratives online by introducing a novel vaccine
narrative classification task, which categorises COVID-19 vaccine claims into
one of seven categories. Following a data augmentation approach, we first
construct a novel dataset for this new classification task, focusing on the
minority classes. We also make use of fact-checker annotated data. The paper
also presents a neural vaccine narrative classifier that achieves an accuracy
of 84% under cross-validation. The classifier is publicly available for
researchers and journalists.Comment: In Proceedings of the 14th International Conference on Recent
Advances in Natural Language Processing, 202
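A leave-one-out evaluation loop of the kind behind a cross-validated accuracy figure can be sketched as below; the toy claims, category labels and the token-overlap classifier are hypothetical stand-ins for the paper's dataset and neural model.

```python
def tokens(text):
    """Lowercased word set of a claim."""
    return set(text.lower().split())

def predict(train, claim):
    """Nearest-neighbour prediction by Jaccard token overlap (toy classifier)."""
    def score(example):
        t, q = tokens(example[0]), tokens(claim)
        return len(t & q) / len(t | q)
    return max(train, key=score)[1]

def loo_accuracy(data):
    """Leave-one-out cross-validated accuracy."""
    correct = 0
    for i, (claim, label) in enumerate(data):
        train = data[:i] + data[i + 1:]   # hold out one example
        correct += predict(train, claim) == label
    return correct / len(data)

# Hypothetical examples; the real task uses seven categories and a neural model.
data = [
    ("the vaccine causes infertility", "safety"),
    ("mrna vaccines alter your dna", "safety"),
    ("vaccines contain tracking microchips", "conspiracy"),
    ("elites track people via vaccines", "conspiracy"),
    ("trial data show strong protection", "efficacy"),
    ("vaccination greatly reduces severe illness", "efficacy"),
]
acc = loo_accuracy(data)
```

The paper's 84% figure comes from k-fold cross-validation of its neural classifier; the loop structure is the same.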
A sequence-based machine learning model for predicting antigenic distance for H3N2 influenza virus
Introduction: Seasonal influenza A H3N2 viruses are constantly changing, reducing the effectiveness of existing vaccines. As a result, the World Health Organization (WHO) needs to frequently update the vaccine strains to match the antigenicity of emerging H3N2 variants. Traditional assessments of antigenicity rely on serological methods, which are both labor-intensive and time-consuming. Although numerous computational models aim to simplify antigenicity determination, they either lack a robust quantitative link between antigenicity and viral sequences or focus restrictively on selected features. Methods: Here, we propose a novel computational method to predict antigenic distances using multiple features, integrating four distinct categories of sequence-derived features that significantly affect viral antigenicity. Results: This method exhibits low error in antigenicity prediction and achieves superior accuracy in discerning antigenic drift. Using this method, we investigated the evolution of the H3N2 influenza viruses and identified a total of 21 major antigenic clusters from 1968 to 2022. Discussion: Interestingly, our predicted antigenic map aligns closely with the antigenic map generated from serological data. Thus, our method is a promising tool for detecting antigenic variants and guiding the selection of vaccine candidates.
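One simple sequence-based ingredient of this kind of antigenic-distance prediction is counting amino-acid differences at antigenic sites; the sketch below uses hypothetical site positions and strain fragments, not the actual H3 epitope definitions or the paper's learned model.

```python
# Hypothetical 0-based antigenic site positions (illustrative only)
EPITOPE_SITES = [3, 5, 8, 12, 14]

def epitope_distance(seq_a, seq_b, sites=EPITOPE_SITES):
    """Count amino-acid mismatches between two aligned sequences at given sites."""
    return sum(seq_a[i] != seq_b[i] for i in sites)

# Illustrative aligned HA fragments (not real strain sequences)
strain_a = "MKTIIALSYIFCLALG"
strain_b = "MKTLIALSYIFCQALG"
d = epitope_distance(strain_a, strain_b)   # mismatches at positions 3 and 12
```

A full model would combine such site-level counts with the paper's other feature categories and regress them against serologically measured antigenic distances.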
3D Cinemagraphy from a Single Image
We present 3D Cinemagraphy, a new technique that marries 2D image animation
with 3D photography. Given a single still image as input, our goal is to
generate a video that contains both visual content animation and camera motion.
We empirically find that naively combining existing 2D image animation and 3D
photography methods leads to obvious artifacts or inconsistent animation. Our
key insight is that representing and animating the scene in 3D space offers a
natural solution to this task. To this end, we first convert the input image
into feature-based layered depth images using predicted depth values, followed
by unprojecting them to a feature point cloud. To animate the scene, we perform
motion estimation and lift the 2D motion into the 3D scene flow. Finally, to
resolve the problem of hole emergence as points move forward, we propose to
bidirectionally displace the point cloud as per the scene flow and synthesize
novel views by separately projecting them into target image planes and blending
the results. Extensive experiments demonstrate the effectiveness of our method.
A user study is also conducted to validate the compelling rendering results of
our method.
Comment: Accepted by CVPR 2023. Project page: https://xingyi-li.github.io/3d-cinemagraphy
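The first step described above, lifting the input image into 3D from predicted depth, can be sketched as a standard pinhole unprojection; the intrinsics and the constant depth map here are illustrative, and the paper additionally carries per-point features and layered depth images.

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) into an (H*W, 3) point cloud via pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)   # toy depth map: a flat plane 2 m away
cloud = unproject(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

Once in 3D, the estimated 2D motion can be lifted onto these points as scene flow, which is what makes the animation and the camera motion mutually consistent.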
DoF-NeRF: Depth-of-Field Meets Neural Radiance Fields
Neural Radiance Field (NeRF) and its variants have exhibited great success on
representing 3D scenes and synthesizing photo-realistic novel views. However,
they are generally based on the pinhole camera model and assume all-in-focus
inputs. This limits their applicability as images captured from the real world
often have finite depth-of-field (DoF). To mitigate this issue, we introduce
DoF-NeRF, a novel neural rendering approach that can deal with shallow DoF
inputs and can simulate the DoF effect. In particular, it extends NeRF to simulate
the aperture of a lens following the principles of geometric optics. Such
physical grounding allows DoF-NeRF to operate on views with different focus
configurations. Benefiting from explicit aperture modeling, DoF-NeRF also
enables direct manipulation of DoF effect by adjusting virtual aperture and
focus parameters. It is plug-and-play and can be inserted into NeRF-based
frameworks. Experiments on synthetic and real-world datasets show that
DoF-NeRF not only performs comparably with NeRF in the all-in-focus setting,
but also can synthesize all-in-focus novel views conditioned on shallow DoF
inputs. An interesting application of DoF-NeRF to DoF rendering is also
demonstrated. The source code will be made available at
https://github.com/zijinwuzijin/DoF-NeRF
Comment: Accepted by ACMMM 202
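Under the thin-lens model from geometric optics that the aperture simulation follows, the blur (circle of confusion) grows with distance from the focus plane; this is a generic textbook sketch, not the paper's implementation, and the parameter values are illustrative.

```python
def circle_of_confusion(depth, focus_dist, aperture, focal_len):
    """Thin-lens blur radius: zero on the focus plane, growing away from it.

    depth, focus_dist, focal_len in meters; aperture is the lens diameter.
    """
    return (aperture * focal_len * abs(depth - focus_dist)
            / (depth * (focus_dist - focal_len)))

# A point on the focus plane renders sharp; one behind it renders blurred.
c_sharp = circle_of_confusion(2.0, focus_dist=2.0, aperture=0.01, focal_len=0.05)
c_blur = circle_of_confusion(4.0, focus_dist=2.0, aperture=0.01, focal_len=0.05)
```

Making the aperture diameter and focus distance differentiable parameters is what lets a NeRF-style model both fit shallow-DoF inputs and re-render with a virtual aperture.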
Power-Line Extraction Method for UAV Point Cloud Based on Region Growing Algorithm
[Introduction] Power lines feature long transmission distances and complex spatial environments. UAV LiDAR point cloud technology can completely and efficiently acquire the geometric information of a power line and its surrounding spatial objects, but existing supervised and unsupervised extraction methods fall short when extracting point cloud data over large areas in complex environments. Based on the spatial characteristics of main network and distribution network line point cloud data, a rapid power line point cloud extraction method is proposed that combines projection line characteristics with a region growing algorithm. [Method] Firstly, since overhead main network lines are usually higher than the surrounding spatial objects, the power lines were roughly extracted by an elevation histogram threshold method. Then, considering that the vegetation canopy can be higher than the distribution network lines in distribution network areas, the KNN data points of the roughly extracted power line point cloud were obtained, the point cloud was projected onto the horizontal plane, and a linearity measure of the projected points was used to judge whether they belonged to a power line. [Result] To handle missing power line points, all power line point cloud clusters are obtained through region growing; on this basis, a catenary is fitted to each power line point cloud cluster, and clusters whose fitting distance is less than a threshold are merged into the same power line.
[Conclusion] The proposed method targets rapid power line extraction in inspection applications and overcomes the problems of missing power line points and vegetation interference during extraction, so it can achieve power line point cloud extraction with high efficiency and accuracy.
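Two steps of the pipeline, the elevation-threshold rough extraction and the catenary-based merge criterion, can be sketched as below; the threshold, curve parameters and toy points are illustrative, not values from the paper.

```python
import math

def rough_extract(points, z_min):
    """Elevation-histogram-style rough extraction: keep points above a height cut."""
    return [p for p in points if p[2] > z_min]

def catenary(x, a, x0, z0):
    """Catenary curve z(x) describing a sagging power line (a: sag parameter)."""
    return z0 + a * (math.cosh((x - x0) / a) - 1)

def fits_catenary(point, a, x0, z0, tol=0.5):
    """Merge criterion: a point belongs to the line if its fitting distance is small."""
    return abs(point[2] - catenary(point[0], a, x0, z0)) < tol

# Toy cloud: one ground point and three points along a sagging span
cloud = [(0.0, 0.0, 3.0), (5.0, 0.0, 20.3), (10.0, 0.0, 20.0), (15.0, 0.0, 20.3)]
high = rough_extract(cloud, z_min=15.0)
on_line = [p for p in high if fits_catenary(p, a=50.0, x0=10.0, z0=20.0)]
```

In the full method the catenary parameters are fitted per cluster (rather than given), and region growing supplies the clusters being tested.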
SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis
We study the problem of novel view synthesis of objects from a single image.
Existing methods have demonstrated the potential in single-view view synthesis.
However, they still fail to recover the fine appearance details, especially in
self-occluded areas. This is because a single view only provides limited
information. We observe that manmade objects usually exhibit symmetric
appearances, which introduce additional prior knowledge. Motivated by this, we
investigate the potential performance gains of explicitly embedding symmetry
into the scene representation. In this paper, we propose SymmNeRF, a neural
radiance field (NeRF) based framework that combines local and global
conditioning under the introduction of symmetry priors. In particular, SymmNeRF
takes the pixel-aligned image features and the corresponding symmetric features
as extra inputs to the NeRF, whose parameters are generated by a hypernetwork.
As the parameters are conditioned on the image-encoded latent codes, SymmNeRF
is thus scene-independent and can generalize to new scenes. Experiments on
synthetic and real-world datasets show that SymmNeRF synthesizes novel views
with more details regardless of the pose transformation, and demonstrates good
generalization when applied to unseen objects. Code is available at:
https://github.com/xingyi-li/SymmNeRF
Comment: Accepted by ACCV 202
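The symmetric features mentioned above rely on mirroring sample locations across an assumed symmetry plane so that image features from the visible side can inform the self-occluded side; here is a minimal sketch of that reflection, where a plane through the origin is an assumption made for illustration.

```python
import numpy as np

def mirror_points(points, normal):
    """Reflect (N, 3) points across a symmetry plane through the origin.

    Uses the reflection formula p' = p - 2 (p . n) n for unit normal n.
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return points - 2.0 * (points @ n)[:, None] * n

pts = np.array([[1.0, 2.0, 3.0], [-0.5, 0.0, 1.0]])
mirrored = mirror_points(pts, normal=[1.0, 0.0, 0.0])  # mirror across the x=0 plane
```

Features sampled at the mirrored locations are then fed to the NeRF alongside the pixel-aligned ones as the extra conditioning input.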
Make-It-4D: Synthesizing a Consistent Long-Term Dynamic Scene Video from a Single Image
We study the problem of synthesizing a long-term dynamic video from only a
single image. This is challenging since it requires consistent visual content
movements given large camera motions. Existing methods either hallucinate
inconsistent perpetual views or struggle with long camera trajectories. To
address these issues, it is essential to estimate the underlying 4D (including
3D geometry and scene motion) and fill in the occluded regions. To this end, we
present Make-It-4D, a novel method that can generate a consistent long-term
dynamic video from a single image. On the one hand, we utilize layered depth
images (LDIs) to represent a scene, and they are then unprojected to form a
feature point cloud. To animate the visual content, the feature point cloud is
displaced based on the scene flow derived from motion estimation and the
corresponding camera pose. Such 4D representation enables our method to
maintain the global consistency of the generated dynamic video. On the other
hand, we fill in the occluded regions by using a pretrained diffusion model to
inpaint and outpaint the input image. This enables our method to work under
large camera motions. Benefiting from our design, our method can be
training-free which saves a significant amount of training time. Experimental
results demonstrate the effectiveness of our approach, which showcases
compelling rendering results.
Comment: accepted by ACM MM'2
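The core animation step described above, displacing the feature point cloud along the estimated scene flow, reduces to a per-point translation at each time step; this sketch is illustrative (the toy cloud and flow are made up), not the paper's code.

```python
import numpy as np

def animate_cloud(cloud, scene_flow, t):
    """Displace each 3D point along its scene-flow vector, scaled by time t."""
    return cloud + t * scene_flow

cloud = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 3.0]])   # (N, 3) unprojected points
flow = np.array([[0.0, 0.1, 0.0], [0.0, 0.2, 0.0]])    # per-point 3D motion
moved = animate_cloud(cloud, flow, t=0.5)
```

Rendering then projects the displaced points through the camera pose at time t, which is what keeps the animation globally consistent under large camera motions.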