
    Electronic sensor technologies in monitoring quality of tea: A review

    Tea, after water, is the most frequently consumed beverage in the world. The fermentation of tea leaves plays a pivotal role in its quality and is usually monitored using laboratory analytical instruments and the olfactory perception of tea tasters. Developing electronic sensing platforms (ESPs), in the form of an electronic nose (e-nose), electronic tongue (e-tongue), and electronic eye (e-eye) equipped with progressive data-processing algorithms, can not only accelerate consumer-based sensory quality assessment of tea but also define new standards for this bioactive product to meet worldwide market demand. Using the complex data sets from electronic signals, integrated with multivariate statistics, can thus contribute to quality prediction and discrimination. The latest achievements and available solutions for easy, accurate, real-time analysis of the sensory-chemical properties of tea and its products, and for solving future problems, are reviewed in terms of bio-mimicking ESPs. These advanced sensing technologies, which measure the aroma, taste, and color profiles and feed the data into mathematical classification algorithms, can discriminate different teas based on their price, geographical origin, harvest, fermentation, storage time, quality grade, and adulteration ratio. Although voltammetric and fluorescent sensor arrays are emerging for the design of e-tongue systems, potentiometric electrodes are more often employed to monitor the taste profiles of tea. The use of a feature-level fusion strategy can significantly improve the efficiency and accuracy of prediction models, accompanied by pattern-recognition associations between the sensory properties and biochemical profiles of tea.
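The feature-level fusion strategy mentioned in this abstract can be sketched in a few lines: feature vectors from each sensing modality are concatenated into a single vector before classification. The sensor channel counts, tea-grade names, and numeric profiles below are purely illustrative, not taken from the review.

```python
import math

# Hypothetical mean sensor responses for two tea grades, measured on a
# 4-channel e-nose and a 3-channel e-tongue (illustrative numbers only).
GRADE_PROFILES = {
    "grade_A": {"nose": [1.0, 0.8, 1.2, 0.9], "tongue": [0.5, 0.4, 0.6]},
    "grade_B": {"nose": [1.4, 1.1, 1.6, 1.3], "tongue": [0.9, 0.7, 1.0]},
}

def fuse(nose, tongue):
    """Feature-level fusion: concatenate the modality vectors into one.
    In practice each modality would first be normalised to a common scale."""
    return list(nose) + list(tongue)

def classify(nose, tongue):
    """Assign a sample to the grade whose fused profile is nearest
    in Euclidean distance."""
    sample = fuse(nose, tongue)
    return min(
        GRADE_PROFILES,
        key=lambda g: math.dist(
            sample, fuse(GRADE_PROFILES[g]["nose"], GRADE_PROFILES[g]["tongue"])
        ),
    )
```

A decision-level scheme would instead classify each modality separately and combine the verdicts; fusing at the feature level, as the review notes, lets one classifier exploit correlations between aroma and taste channels.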

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies for biometrics, reliability measures, pressure-based typing and signature verification, biochemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, while also covering state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Objective Assessment of Area and Erythema of Psoriasis Lesion Using Digital Imaging and Colourimetry

    Psoriasis is a non-contagious skin disease which typically consists of red plaques covered by silvery-white scales. It affects about 3% of the world population. During treatment, dermatologists monitor the extent of psoriasis continuously to ascertain treatment efficacy. The Psoriasis Area and Severity Index (PASI) is the current gold-standard method used to assess the extent of psoriasis. In PASI, four parameters are scored: the surface area affected, and the erythema (redness), thickness, and scaliness of the plaques. Determining the PASI score is a tedious task, and thus it is not used in daily clinical practice. In addition, the PASI parameters are visually determined and may result in intra-observer and inter-observer variations, even among experienced dermatologists. Objective methods for assessing the area and erythema of psoriasis lesions have been developed in this thesis. A psoriasis lesion can be recognized by its colour dissimilarity with normal skin. Colour dissimilarity is represented by colour difference in the CIELAB colour space, a widely used colour space for measuring colour dissimilarity. Each pixel in CIELAB colour space can be represented by its lightness (L*), hue (hab), and chroma (C*ab). The colour difference between psoriasis lesion and normal skin is analyzed in the hue-chroma plane of CIELAB colour space. Centroids of normal skin and lesion in hue-chroma space are obtained from selected samples. Euclidean distances between all pixels and these two centroids are then calculated, and each pixel is assigned to the class of the nearest centroid. The erythema of a psoriasis lesion is affected by the degree of severity and by skin pigmentation. In order to assess erythema objectively, patients are grouped according to their skin pigmentation level. The L* value of normal skin, which represents the skin pigmentation level, is used to group patients into three skin types, namely fair, brown, and dark. The lightness difference (ΔL*), hue difference (Δhab), and chroma difference (ΔC*ab) in CIELAB colour space between reference lesions and the surrounding normal skin are analyzed. It is found that the erythema score of a lesion can be determined from its hue difference (Δhab) value within a particular skin-type group. Out of 30 body regions, the proposed method gives the same PASI area score as the reference for 28 body regions. The proposed method is able to determine the PASI erythema score of 82 lesions obtained from 22 patients objectively, without being influenced by other characteristics of the lesion such as area, pattern, and boundary.
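The nearest-centroid assignment this abstract describes can be sketched as follows. The hue and chroma centroid values are hypothetical placeholders for the ones that would be obtained from user-selected samples of normal skin and lesion.

```python
import math

# Hypothetical centroids of normal skin and lesion in the hue-chroma
# (hab, C*ab) plane, as obtained from user-selected sample regions.
CENTROID_SKIN = (55.0, 22.0)    # (hue angle in degrees, chroma)
CENTROID_LESION = (35.0, 35.0)

def label_pixel(hue, chroma):
    """Assign a pixel to 'skin' or 'lesion' by the nearest centroid
    (Euclidean distance in the hue-chroma plane)."""
    d_skin = math.dist((hue, chroma), CENTROID_SKIN)
    d_lesion = math.dist((hue, chroma), CENTROID_LESION)
    return "lesion" if d_lesion < d_skin else "skin"

def lesion_area_fraction(pixels):
    """Fraction of a body region's pixels classified as lesion,
    the quantity underlying the PASI area score."""
    hits = sum(1 for h, c in pixels if label_pixel(h, c) == "lesion")
    return hits / len(pixels)
```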

    Proof-of-Concept

    Biometrics is an area in great expansion and is considered a possible solution for cases where high authentication standards are required. Although the area is quite advanced in theoretical terms, its practical use still carries some problems. Available systems still depend on a high level of cooperation to achieve acceptable performance, which was the backdrop to the development of the following project. By studying the state of the art, we propose the creation of a new and less cooperative biometric system that reaches acceptable performance levels. The constant need for higher security standards, namely at the level of authentication, leads to the study of biometrics as a possible solution. The mechanisms currently in use are based on something one knows (a password) or something one possesses (a PIN code); this kind of information, however, is easily compromised or circumvented. Biometrics is therefore seen as a more robust solution, since it guarantees that authentication is based on physical or behavioural measures that define something a person is or does ("who you are" or "what you do"). As biometrics is a very promising approach to the authentication of individuals, new biometric systems appear ever more frequently. These systems rely on physical or behavioural measures to enable authentication (recognition) with a considerable degree of certainty. Recognition based on the movement of the human body (gait), facial features, or the structural patterns of the iris are some examples of the information sources on which current systems can be based. However, despite performing well as autonomous recognition agents, they remain highly dependent on the cooperation they demand.
With this in mind, and considering everything that already exists in the field of biometric recognition, the area is taking steps towards making its methods as uncooperative as possible, extending its goals beyond mere authentication in controlled environments to surveillance and control in non-cooperative settings (e.g. riots, robberies, airports). It is in this perspective that the following project arises. Through the study of the state of the art, it intends to prove that it is possible to create a system capable of operating in less cooperative environments, able to detect and recognise a person who comes within its range. The proposed system, PAIRS (Periocular and Iris Recognition System), as its name indicates, performs recognition using information extracted from the iris and the periocular region (the region surrounding the eyes). The system is built on four stages: data capture, pre-processing, feature extraction, and recognition. For the data-capture stage, a high-resolution image-acquisition device capable of capturing in the NIR (near-infrared) spectrum was assembled. Capturing images in this spectrum mainly favours iris recognition, since capture in the visible spectrum would be more sensitive to variations in ambient light. The pre-processing stage then incorporates all the system modules responsible for user detection, image-quality assessment, and iris segmentation. The detection module triggers the whole process, since it is responsible for verifying the presence of a person in the scene. Once that presence is verified, the regions of interest corresponding to the iris and the periocular area are located, and the quality of their acquisition is also checked. With these steps concluded, the iris of the left eye is segmented and normalised.
Afterwards, based on several descriptors, the biometric information of the regions of interest is extracted and a biometric feature vector is created. Finally, the collected biometric data are compared with those already stored in the database, producing a ranked list of biometric similarity levels and thus the system's final response. Once the implementation was concluded, a set of images was acquired with the implemented system, with the participation of a group of volunteers. This image set made it possible to run performance tests, to verify and tune parameters, and to optimise the feature-extraction and recognition components of the system. Analysis of the results showed that the proposed system is capable of performing its functions under less cooperative conditions.
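The matching stage of a system like PAIRS (comparing a probe's biometric feature vector against every enrolled template to produce a ranked similarity list) can be sketched with toy bit-string templates. A real deployment would use iris codes of thousands of bits, but the ranking logic is the same; the names and codes below are invented.

```python
# Toy enrolled templates: short bit strings standing in for the binary
# iris/periocular codes a real system would store (thousands of bits).
DATABASE = {
    "alice": "1010110010",
    "bob":   "0101001101",
    "carol": "1110001100",
}

def hamming(a, b):
    """Fractional Hamming distance between two equal-length bit strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def rank_candidates(probe, database):
    """Compare a probe code against every enrolled template and return
    (identity, distance) pairs ranked from most to least similar."""
    scores = [(name, hamming(probe, code)) for name, code in database.items()]
    return sorted(scores, key=lambda kv: kv[1])

# A probe one bit away from alice's template ranks her first.
ranking = rank_candidates("1010110011", DATABASE)
```

The top of the ranked list, together with a decision threshold on the distance, yields the system's final identification response.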

    Advances in non-destructive early assessment of fruit ripeness towards defining optimal time of harvest and yield prediction—a review

    © 2018 by the authors. Licensee MDPI, Basel, Switzerland. Global food security for the increasing world population requires not only increased sustainable production of food but also a significant reduction in pre- and post-harvest waste. The timing of when a fruit is harvested is critical for reducing waste along the supply chain and increasing fruit quality for consumers. The early in-field assessment of fruit ripeness, and the prediction of harvest date and yield by non-destructive technologies, have the potential to revolutionize farming practices and enable the consumer to eat the tastiest and freshest fruit possible. A variety of non-destructive techniques have been applied to estimate ripeness or maturity, but not all of them are applicable for in situ (field or glasshouse) assessment. This review focuses on the non-destructive methods which are promising for, or have already been applied to, pre-harvest in-field measurements, including colorimetry, visible imaging, spectroscopy, and spectroscopic imaging. Machine learning and regression models used in assessing ripeness are also discussed.
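As a minimal illustration of the regression models the review discusses, the sketch below fits a least-squares line mapping one colorimetric feature (hue angle) to a ripeness index. The calibration data are invented for illustration; real models typically use many spectral features and more flexible learners.

```python
# Hypothetical calibration set: hue angle (degrees) measured by colorimetry
# against a ripeness index in [0, 1]; hue angle falls as the fruit ripens.
HUE = [110, 95, 80, 65, 50]
RIPENESS = [0.1, 0.3, 0.5, 0.7, 0.9]

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

A, B = fit_line(HUE, RIPENESS)

def predict_ripeness(hue_angle):
    """Predict the ripeness index of a new fruit from its hue angle."""
    return A * hue_angle + B
```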

    Text Segmentation in Web Images Using Colour Perception and Topological Features

    The research presented in this thesis addresses the problem of text segmentation in Web images. Text is routinely created in image form (headers, banners, etc.) on Web pages, as an attempt to overcome the stylistic limitations of HTML. This text, however, has a potentially high semantic value in terms of indexing and searching for the corresponding Web pages. As current search engine technology does not allow for text extraction and recognition in images, text in image form is ignored. Moreover, it is desirable to obtain a uniform representation of all visible text of a Web page (for applications such as voice browsing or automated content analysis). This thesis presents two methods for text segmentation in Web images using colour perception and topological features. The nature of Web images and the implicit problems of text segmentation are described, and a study is performed to assess the magnitude of the problem and establish the need for automated text segmentation methods. Two segmentation methods are subsequently presented: the split-and-merge segmentation method and the fuzzy segmentation method. Although approached in a distinctly different way in each method, the safe assumption that a human being should be able to read the text in any given Web image is the foundation of both methods' reasoning. This anthropocentric character of the methods, along with the use of topological features of connected components, comprises the underlying working principles of the methods. An approach for classifying the connected components resulting from the segmentation methods as either characters or parts of the background is also presented.
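A basic building block behind both methods' use of topological features is connected-component extraction from a binarised image. The sketch below labels 4-connected foreground components with a flood fill; size and shape statistics of such components are the kind of features a classifier could then use to separate characters from background. The tiny test image is illustrative.

```python
def connected_components(grid):
    """Return the 4-connected components of the foreground (value 1) of a
    2-D binary grid, each as a list of (row, col) pixel coordinates."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    comps = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] == 1 and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

# Two separate foreground blobs in a 3x4 binary image.
img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
```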

    CHARACTERIZATION OF ENGINEERED SURFACES

    In recent years there has been increasing interest in manufacturing products where surface topography plays a functional role. These surfaces are called engineered surfaces and are used in a variety of industries such as semiconductors, data storage, micro-optics, and MEMS. Engineered products are designed, manufactured, and inspected to meet a variety of specifications, such as size, position, geometry, and surface finish, to control the physical, chemical, optical, and electrical properties of the surface. As the manufacturing industry strives towards shrinking form factors, resulting in the miniaturization of surface features, the measurement of such micro- and nanometer-scale surfaces is becoming more challenging. Great strides have been made in instrumentation to capture surface data, but the algorithms and procedures to determine the form, size, and orientation of surface features still lack the advancement needed to support the characterization requirements of R&D and high-volume manufacturing. This dissertation addresses the development of fast and intelligent surface-scanning algorithms and methodologies for engineered surfaces to determine the form, size, and orientation of significant surface features. Object recognition techniques are used to identify the surface features, and CMM-type fitting algorithms are applied to calculate the dimensions of the features. Recipes can be created to automate the characterization and to process multiple features simultaneously. The developed methodologies are integrated into a surface analysis toolbox developed in the MATLAB environment. The deployment of the developed application on the Web is demonstrated.
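As an illustration of the CMM-type fitting mentioned above, the sketch below fits a least-squares reference line to a measured 2-D surface profile and reports straightness as the peak-to-valley residual. The profile data are illustrative; a real implementation would fit planes, circles, and other primitives in 3-D.

```python
def fit_reference_line(xs, zs):
    """Least-squares reference line z = a*x + b through profile points."""
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    a = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, mz - a * mx

def straightness(xs, zs):
    """Peak-to-valley deviation of the profile from its reference line,
    a CMM-style form-error measure for a 2-D trace."""
    a, b = fit_reference_line(xs, zs)
    residuals = [z - (a * x + b) for x, z in zip(xs, zs)]
    return max(residuals) - min(residuals)
```

A perfectly straight but tilted profile yields zero straightness error, because the fitted reference line absorbs the tilt; only the departure from form remains.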

    A Mini Review of Trends towards Automated and Non-Invasive Techniques for Early Detection of Lung Cancer: From Radiomics through Proteogenomics to Breathomics

    Carcinoma of the lung is one of the most common cancers in the world and the leading cause of tumor-related deaths. Fewer than 15% of patients survive 5 years post-diagnosis, owing to its relatively poor prognosis. This has been ascribed to the lack of effective diagnostic methods for early detection. Different medical imaging techniques, such as chest radiography, computed tomography (CT), and magnetic resonance imaging (MRI), are used in routine clinical practice for tumor detection. These techniques are medically unsatisfactory and inconvenient for patients due to poor diagnostic accuracy. Endobronchial biopsies are the gold standard for diagnosis but carry the inherent risks of fully or partially invasive procedures. Thus, diagnostic technology that combines data mining algorithms with medical image analysis, generally known as radiomics, emerged. Radiomics extracts complex information from conventional radiographic images and quantitatively correlates image features with diagnostic and therapeutic outcomes. In spite of these benefits, radiomics is prone to high false-positive rates, and there is no established standard for the acquisition of parameters. Further efforts towards outcome improvement led to the proteomic and genomic (proteogenomic) approach to lung cancer detection. Although proteogenomics has a diagnostic edge over traditional techniques, variation in bio-specimens and the heterogeneity of lung cancer still pose a major challenge. Recent findings have established that changes normally occur in genes or proteins due to tumor growth in the lungs, and this often leads to peroxidation of the cell membrane, which releases volatile organic compounds (VOCs) in the breath of lung cancer patients. The comprehensive analysis of breath VOCs, termed breathomics in the literature, unveils opportunities for non-invasive biomarker discovery towards early detection. Breathomics has therefore become the current pace-setter in medical diagnostics research because of its non-invasiveness and cost-effectiveness. This paper presents a mini survey of trends in early lung cancer detection, from radiomics through proteogenomics to breathomics.