1,492 research outputs found

    Anomaly activity classification in the grocery stores

    Get PDF
    Nowadays, because of the growing number of robberies in shopping malls and grocery stores, automated camera applications are a vital necessity for detecting anomalous actions. These events usually happen quickly and unexpectedly, so a robust system that can classify anomalies in real time with a minimum of false alarms is required. The main objective of this project is therefore to classify anomalies that may happen in grocery stores. The design assumes one fixed camera in the store and the presence of at least one person in the camera view, and the actions of the human upper body are used to determine the anomalies. An articulated motion model is used as the basis of the anomaly classification design. The process starts with feature extraction, followed by target model establishment, tracking, and action classification. Features such as color and image gradient build the template used as the target model. The models of the different upper body parts are then tracked over consecutive frames using sum of squared differences (SSD) matching combined with a Kalman filter as the predictor. The spatio-temporal information, namely the limb trajectories obtained from tracking, is passed to the proposed classification stage. For classification, three scenarios are studied: attacking the cash machine, attacking the cashier, and making the store messy. To implement these scenarios, several event types are introduced: basic (static) events, which correspond to static objects in the scene; spatial events, which are actions that depend on the coordinates of body parts; and spatio-temporal events, in which these actions are tracked over consecutive frames. If one of the scenarios occurs, an anomalous action is detected. The results show the robustness of the proposed methods, which achieve a minimum false positive error of 7% for the cash machine attack scenario and a minimum false negative error of 19% for the cashier attack scenario.
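    The abstract does not give implementation details for the tracking stage, so the following is only a minimal sketch of the general idea it describes: a Kalman filter predicts where an upper-body part will appear in the next frame, and SSD template matching refines the position within a search window around that prediction. The constant-velocity state model, the search radius, and the noise covariances are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch (assumed details, not the thesis implementation): track one upper-body
# part by predicting its position with a constant-velocity Kalman filter, then refining
# it with SSD (sum of squared differences) template matching in a local search window.
import numpy as np
import cv2

def make_kalman(x0, y0):
    """Constant-velocity Kalman filter over the state (x, y, dx, dy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # assumed noise levels
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[x0], [y0], [0], [0]], np.float32)
    return kf

def track_step(kf, frame_gray, template, search_radius=32):
    """One frame of tracking: predict, match by SSD around the prediction, correct.

    frame_gray and template are grayscale arrays of the same dtype (e.g. uint8)."""
    px, py = kf.predict()[:2, 0]               # predicted top-left corner of the part
    th, tw = template.shape
    H, W = frame_gray.shape
    x0 = int(max(0, px - search_radius)); x1 = int(min(W, px + tw + search_radius))
    y0 = int(max(0, py - search_radius)); y1 = int(min(H, py + th + search_radius))
    window = frame_gray[y0:y1, x0:x1]
    # TM_SQDIFF computes the sum of squared differences; the minimum is the best match.
    ssd = cv2.matchTemplate(window, template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(ssd)
    meas = np.array([[np.float32(x0 + min_loc[0])],
                     [np.float32(y0 + min_loc[1])]])
    kf.correct(meas)                           # fuse the SSD measurement into the state
    return x0 + min_loc[0], y0 + min_loc[1]    # matched top-left corner in the frame
```

    In a full tracker, one such filter-and-template pair would be maintained per upper-body part, and the matched positions accumulated over frames would form the limb trajectories used by the classification stage.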

    Department of Computer Science Activity 1998-2004

    Get PDF
    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    Get PDF
    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn ever more attention, not just from computer science but from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most work in the area of feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think. In this digital world, where the use of images for a variety of purposes continues to increase, researchers who are serious about addressing the aforementioned limitations must think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its two-dimensional (2D) intrinsic mode functions, known as bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment makes the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the detector.
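    The abstract leaves the details of the BEMD sifting, envelope estimation, and stopping criteria to the dissertation itself, so the sketch below only illustrates the overall flow in a heavily simplified form: approximate upper and lower envelopes with smoothed max/min filters, take the residual as a rough first BIMF, then binarize and thin its magnitude into an edge map. The window size, smoothing sigma, threshold, and helper names are assumptions for illustration only.

```python
# Heavily simplified illustration (not the BEMDEC algorithm itself): approximate a few
# BEMD sifting steps with smoothed max/min envelopes, take the residual as a rough
# first BIMF, then binarize and thin it into an edge map. All parameters are assumed.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def first_bimf(image, window=9, sigma=4.0, n_sift=3):
    """Crude sifting: repeatedly subtract the mean of smoothed upper/lower envelopes."""
    h = image.astype(np.float64)
    for _ in range(n_sift):
        upper = ndimage.gaussian_filter(ndimage.maximum_filter(h, size=window), sigma)
        lower = ndimage.gaussian_filter(ndimage.minimum_filter(h, size=window), sigma)
        h = h - 0.5 * (upper + lower)           # remove the local mean surface
    return h                                    # highest-frequency oscillatory component

def bemd_edges(image, thresh_scale=1.0):
    """Binarize the first BIMF's magnitude, then thin it to one-pixel-wide edges."""
    mag = np.abs(first_bimf(image))
    binary = mag > thresh_scale * mag.mean()    # simple global threshold (assumption)
    return skeletonize(binary)                  # morphological thinning

# usage: edge_map = bemd_edges(gray)            # gray: 2-D grayscale array
```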

    Legal Education During the COVID-19 Pandemic: Put Health, Safety and Equity First

    Get PDF
    The COVID-19 viral pandemic exposed equity and safety culture gaps in American legal education. Legal education forms part of America’s Critical Infrastructure whose continuity is important to the economy, public safety, democracy, and the national security of the United States. To address the COVID-19 pandemic and prepare for future viral pandemics and safety risks, this article recommends law schools develop a safety culture to foster health, safety, robust educational dialogue, and equity. To guide safety-and-equity-centered decision-making and promote effective legal education during and following the COVID-19 pandemic, this article contends legal education must put health, safety, and equity first. It proposes an ethical framework for legal education that centers diversity and inclusion as the foundation of robust educational dialogue. This article’s interdisciplinary analysis of COVID-19 scientific studies recommends law schools follow the science and exercise extreme caution before convening classes in person or in a hybrid fashion. COVID-19 infection risks serious illness, long-lasting complications, and death. It has preyed on America’s inequities. African-Americans, Native Americans, Latinx Americans, older Americans, and those with certain underlying health conditions including pregnant women face higher levels of hospitalization and death from COVID-19 infection. COVID-19’s inequitable risks may separate those participating in class in person, or online, by race, ethnicity, tribe, age, and health. Law schools must ensure that during the COVID-19 health emergency, hybrid or in-person pedagogical models do not undermine diversity and inclusion that supports educational dialogue and First Amendment values. The COVID-19 pandemic underscores the imperative of putting health, safety, and equity first in legal education

    A Field Guide to Genetic Programming

    Get PDF
    xiv, 233 p. : ill. ; 23 cm. Electronic book. A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination until solutions emerge, all without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions.
    Contents: Introduction -- Representation, initialisation and operators in Tree-based GP -- Getting ready to run genetic programming -- Example genetic programming run -- Alternative initialisations and operators in Tree-based GP -- Modular, grammatical and developmental Tree-based GP -- Linear and graph genetic programming -- Probabilistic genetic programming -- Multi-objective genetic programming -- Fast and distributed genetic programming -- GP theory and its applications -- Applications -- Troubleshooting GP -- Conclusions.
    Detailed contents:
    1 Introduction: Genetic Programming in a Nutshell; Getting Started; Prerequisites; Overview of this Field Guide
    Part I, Basics:
    2 Representation, Initialisation and Operators in Tree-based GP: Representation; Initialising the Population; Selection; Recombination and Mutation
    3 Getting Ready to Run Genetic Programming: Step 1: Terminal Set; Step 2: Function Set (Closure; Sufficiency; Evolving Structures other than Programs); Step 3: Fitness Function; Step 4: GP Parameters; Step 5: Termination and Solution Designation
    4 Example Genetic Programming Run: Preparatory Steps; Step-by-Step Sample Run (Initialisation; Fitness Evaluation; Selection, Crossover and Mutation; Termination and Solution Designation)
    Part II, Advanced Genetic Programming:
    5 Alternative Initialisations and Operators in Tree-based GP: Constructing the Initial Population (Uniform Initialisation; Initialisation may Affect Bloat; Seeding); GP Mutation (Is Mutation Necessary?; Mutation Cookbook); GP Crossover; Other Techniques
    6 Modular, Grammatical and Developmental Tree-based GP: Evolving Modular and Hierarchical Structures (Automatically Defined Functions; Program Architecture and Architecture-Altering); Constraining Structures (Enforcing Particular Structures; Strongly Typed GP; Grammar-based Constraints; Constraints and Bias); Developmental Genetic Programming; Strongly Typed Autoconstructive GP with PushGP
    7 Linear and Graph Genetic Programming: Linear Genetic Programming (Motivations; Linear GP Representations; Linear GP Operators); Graph-Based Genetic Programming (Parallel Distributed GP (PDGP); PADO; Cartesian GP; Evolving Parallel Programs using Indirect Encodings)
    8 Probabilistic Genetic Programming: Estimation of Distribution Algorithms; Pure EDA GP; Mixing Grammars and Probabilities
    9 Multi-objective Genetic Programming: Combining Multiple Objectives into a Scalar Fitness Function; Keeping the Objectives Separate (Multi-objective Bloat and Complexity Control; Other Objectives; Non-Pareto Criteria); Multiple Objectives via Dynamic and Staged Fitness Functions; Multi-objective Optimisation via Operator Bias
    10 Fast and Distributed Genetic Programming: Reducing Fitness Evaluations/Increasing their Effectiveness; Reducing Cost of Fitness with Caches; Parallel and Distributed GP are Not Equivalent; Running GP on Parallel Hardware (Master–slave GP; GP Running on GPUs; GP on FPGAs; Sub-machine-code GP); Geographically Distributed GP
    11 GP Theory and its Applications: Mathematical Models; Search Spaces; Bloat (Bloat in Theory; Bloat Control in Practice)
    Part III, Practical Genetic Programming:
    12 Applications: Where GP has Done Well; Curve Fitting, Data Modelling and Symbolic Regression; Human Competitive Results – the Humies; Image and Signal Processing; Financial Trading, Time Series, and Economic Modelling; Industrial Process Control; Medicine, Biology and Bioinformatics; GP to Create Searchers and Solvers – Hyper-heuristics; Entertainment and Computer Games; The Arts; Compression
    13 Troubleshooting GP: Is there a Bug in the Code?; Can you Trust your Results?; There are No Silver Bullets; Small Changes can have Big Effects; Big Changes can have No Effect; Study your Populations; Encourage Diversity; Embrace Approximation; Control Bloat; Checkpoint Results; Report Well; Convince your Customers
    14 Conclusions
    Tricks of the Trade
    A Resources: Key Books; Key Journals; Key International Meetings; GP Implementations; On-Line Resources
    B TinyGP: Overview of TinyGP; Input Data Files for TinyGP; Source Code; Compiling and Running TinyGP
    Bibliography; Index
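    As a concrete illustration of the "GP in a nutshell" loop the book describes (a random population of programs refined by selection, crossover and mutation), here is a toy tree-based GP for symbolic regression. It is not the TinyGP implementation from the book's Appendix B; the representation, operators and every parameter below are illustrative assumptions.

```python
# Toy tree-based GP for symbolic regression, illustrating the evolutionary loop the
# book describes (random programs refined by selection, crossover and mutation).
# Not the book's TinyGP; representation, operators and parameters are all assumed.
import random
import operator

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMS = ['x', -2.0, -1.0, 0.0, 1.0, 2.0]

def rand_tree(depth=3):
    """A program is either a terminal or a (function, [children]) tuple."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f, arity = random.choice(FUNCS)
    return (f, [rand_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    f, kids = tree
    return f(*(evaluate(k, x) for k in kids))

def fitness(tree, cases):
    """Sum of absolute errors over the training cases (lower is better)."""
    return sum(abs(evaluate(tree, x) - y) for x, y in cases)

def nodes(tree, path=()):
    """Enumerate (path, subtree) pairs so random variation points can be chosen."""
    out = [(path, tree)]
    if isinstance(tree, tuple):
        for i, kid in enumerate(tree[1]):
            out += nodes(kid, path + (i,))
    return out

def replace(tree, path, new):
    if not path:
        return new
    f, kids = tree
    kids = list(kids)
    kids[path[0]] = replace(kids[path[0]], path[1:], new)
    return (f, kids)

def crossover(a, b):
    """Subtree crossover: graft a random subtree of b at a random point in a."""
    point, _ = random.choice(nodes(a))
    _, donor = random.choice(nodes(b))
    return replace(a, point, donor)

def mutate(a):
    """Subtree mutation: replace a random point in a with a fresh random tree."""
    point, _ = random.choice(nodes(a))
    return replace(a, point, rand_tree(2))

def tournament(pop, cases, k=3):
    return min(random.sample(pop, k), key=lambda t: fitness(t, cases))

def evolve(cases, pop_size=100, gens=20, p_cross=0.9):
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            if random.random() < p_cross:
                child = mutate(crossover(tournament(pop, cases), tournament(pop, cases)))
            else:
                child = tournament(pop, cases)      # straight reproduction
            new.append(child)
        pop = new
    return min(pop, key=lambda t: fitness(t, cases))

# usage: evolve an approximation of y = x*x + x from 20 sampled points
cases = [(x / 10.0, (x / 10.0) ** 2 + x / 10.0) for x in range(-10, 10)]
best = evolve(cases)
print('best error:', fitness(best, cases))
```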

    MEMS Technology for Biomedical Imaging Applications

    Get PDF
    Biomedical imaging is the key technique and process for creating informative images of the human body or other organic structures for clinical purposes or medical science. Micro-electro-mechanical systems (MEMS) technology has demonstrated enormous potential in biomedical imaging applications due to its outstanding advantages of, for instance, miniaturization, high speed, high resolution, and the convenience of batch fabrication. Many advancements and breakthroughs are emerging in the academic community, and a number of challenges have accordingly been raised concerning the design, structure, fabrication, integration, and application of MEMS for all kinds of biomedical imaging. This Special Issue aims to collate and showcase research papers, short communications, perspectives, and insightful review articles from esteemed colleagues that present: (1) original work on MEMS components or devices based on various kinds of mechanisms for biomedical imaging; and (2) new developments in, and the potential of, applying MEMS technology of any kind in biomedical imaging. The objective of this Special Issue is to provide insightful information regarding these technological advancements for researchers in the community.

    Annual Report of the University, 2001-2002, Volumes 1-4

    Get PDF
    VITAL ACADEMIC CLIMATE* by Brian Foster, Provost/Vice President of Academic Affairs: A great university engages students and faculty fully in important ideas and issues ... not just to learn about them, but to take them apart and put them back together, to debate, deconstruct, resist, reconstruct and build upon them. Engagement of this sort takes concentration and commitment, and it produces the kind of discipline and passion that leads to student and faculty success and satisfaction in their studies, research, performance, artistic activity and service. It is also the kind of activity that creates a solid, nurturing spirit of community. This is what we mean when we talk about a vital academic climate. We are striving for an environment that will enrich the social, cultural and intellectual lives of all who come in contact with the University. Many things interconnect to make this happen: curriculum, co-curricular activities, conferences, symposia, cultural events, community service, research and social activity. Our goal is to create the highest possible level of academic commitment and excitement at UNM. This is what characterizes a truly great university. (*Strategic Direction 2) New Mexico native Andres C. Salazar, who holds a Ph.D. in electrical engineering from Michigan State University, has been named the PNM Chair in Microsystems, Commercialization and Technology. Carrying the title of professor, the PNM Chair is a joint appointment between the School of Engineering and the Anderson Schools of Management. Spring 2002 graduate John Probasco was selected a 2002 Rhodes Scholar, the second UNM student to be so honored in the past four years; the biochemistry major from Alamogordo previously had been awarded the Goldwater Scholarship and the Truman Scholarship, and was honored by Student Activities Director Debbie Morris. Biology student Sophie Peterson of Albuquerque was one of 30 students nationwide to receive a 2002-2003 Award of Excellence from Phi Kappa Phi, the oldest and largest national honor society. Regents' Professor of Communication and Journalism Everett M. Rogers was selected the University's 47th Annual Research Lecturer, the highest honor UNM bestows upon members of its faculty. New Mexico resident, author and poet Simon J. Ortiz received an Honorary Doctorate of Letters at Spring Commencement ceremonies. Child advocate Angela "Angie" Vachio, founder and executive director of Peanut Butter and Jelly Family Services, Inc., was awarded an Honorary Doctorate of Humane Letters. American Studies Assistant Professor Amanda J. Cobb won the 22nd annual American Book Award for Listening to Our Grandmothers' Stories: The Bloomfield Academy for Chickasaw Females, 1852-1949.

    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Get PDF
    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial domain, wavelet transform domain, and multiple image fusion approaches are investigated in this dissertation research. In the category of spatial domain approaches, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale neighborhood-dependent contrast enhancement process is proposed to enhance the high-frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprising adaptive luminance enhancement and adaptive contrast enhancement. An image-dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale image-dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model based technique due to its high-speed processing capability, while AINDANE produces higher quality enhanced images due to its multi-scale contrast enhancement property. Both algorithms exhibit balanced luminance and contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques. In the transform domain approach, wavelet transform based image denoising and contrast enhancement algorithms are developed. The denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to explore the inter-level dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid complex iterative computations for finding a numerical solution. This relatively low-complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high quality denoised images.
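    The dissertation's exact illuminance estimator and transfer function are not spelled out in the abstract, so the following is only a rough sketch of the I-R pipeline described above: estimate the illuminance with a Gaussian low-pass filter (a stand-in), compress its dynamic range with a windowed logit (inverse sigmoid) curve, recombine with the extracted reflectance, and restore color by linearly scaling the original RGB channels. The neighborhood-dependent contrast enhancement step is omitted, and every constant is an assumption.

```python
# Rough sketch (assumed estimators, not the dissertation's exact method) of the
# illuminance-reflectance pipeline: Gaussian low-pass illuminance, windowed inverse
# sigmoid (logit) dynamic range compression, recombination with the reflectance,
# and linear color restoration. The contrast enhancement stage is omitted.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_ir(rgb, sigma=30.0, lo=0.02, hi=0.98, eps=1e-6):
    """rgb: float array in [0, 1] with shape (H, W, 3). Returns the enhanced image."""
    intensity = rgb.mean(axis=2)                  # scene intensity
    illum = gaussian_filter(intensity, sigma)     # illuminance estimate (low-pass)
    reflect = (intensity + eps) / (illum + eps)   # reflectance = intensity / illuminance

    # Windowed inverse sigmoid: clip the illuminance to [lo, hi], apply the logit,
    # and rescale to [0, 1]. Shadows are lifted and highlights are compressed.
    x = np.clip(illum, lo, hi)
    logit = np.log(x / (1.0 - x))
    lo_l, hi_l = np.log(lo / (1.0 - lo)), np.log(hi / (1.0 - hi))
    illum_enh = (logit - lo_l) / (hi_l - lo_l)

    intensity_enh = np.clip(illum_enh * reflect, 0.0, 1.0)

    # Linear color restoration: scale the original RGB channels by the intensity ratio
    # so the chromatic content of the input image is approximately preserved.
    ratio = (intensity_enh + eps) / (intensity + eps)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

# usage: out = enhance_ir(img.astype(np.float64) / 255.0)
```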

    Video Quality Pooling Adaptive to Perceptual Distortion Severity

    Full text link