Recovering the Imperfect: Cell Segmentation in the Presence of Dynamically Localized Proteins
Deploying off-the-shelf segmentation networks on biomedical data has become
common practice, yet if structures of interest in an image sequence are visible
only temporarily, existing frame-by-frame methods fail. In this paper, we
provide a solution to segmentation of imperfect data through time based on
temporal propagation and uncertainty estimation. We integrate uncertainty
estimation into the Mask R-CNN network and propagate motion-corrected segmentation
masks from frames with low uncertainty to those frames with high uncertainty to
handle temporary loss of signal for segmentation. We demonstrate the value of
this approach over frame-by-frame segmentation and regular temporal propagation
on data from human embryonic kidney (HEK293T) cells transiently transfected
with a fluorescent protein that moves in and out of the nucleus over time. The
method presented here will empower microscopic experiments aimed at
understanding molecular and cellular function.Comment: Accepted at MICCAI Workshop on Medical Image Learning with Less
Labels and Imperfect Data, 202
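The propagation step described above can be sketched in miniature. This is a deliberately simplified illustration: it drops the motion correction and the Mask R-CNN uncertainty head entirely, assuming a single scalar uncertainty per frame and an identity motion model (the 0.5 threshold is an arbitrary choice for the sketch, not a value from the paper):

```python
import numpy as np

def propagate_masks(masks, uncertainties, threshold=0.5):
    """Carry the most recent low-uncertainty mask forward into
    high-uncertainty frames (identity motion model for simplicity)."""
    out, last_reliable = [], None
    for mask, u in zip(masks, uncertainties):
        if u <= threshold:
            last_reliable = mask       # trusted frame: keep its own mask
            out.append(mask)
        elif last_reliable is not None:
            out.append(last_reliable)  # borrow the last trusted mask
        else:
            out.append(mask)           # nothing trusted seen yet
    return out

# Three frames; the middle one has high uncertainty (signal lost).
frames = [np.full((4, 4), float(i)) for i in range(3)]
result = propagate_masks(frames, uncertainties=[0.1, 0.9, 0.2])
```

With a real tracker, the borrowed mask would additionally be warped by the estimated motion between the reliable frame and the uncertain one.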
Enhancement of Image and Video using In-Painting Technique
A comprehensive image in-painting method was proposed to enhance the two critical tasks in the prior hybrid method: setting up the best application order for in-painting textural and structural missing regions, and extracting the sub-image containing the best candidate source patches to fill in a missing region. By integrating our execution-order-analysis-based solution to the first task and our context-driven source-image-extraction solution to the second, we were able to consistently improve in-painting quality over the previous non-hybrid in-painting method, while spending much less processing time than conventional hybrid in-painting methods.
Image in-painting is the process of restoring or removing objects in an image; the basic task is to fill a missing region from its surrounding information. The technique supports numerous applications, such as restoring or removing degraded parts of an image, text removal, stamp or symbol removal, and disocclusion in image-based rendering (IBR). Image in-painting is an ill-posed inverse problem: there is no single well-defined technique for it. In-painting techniques are broadly categorized into two types: texture-based in-painting and structure-based in-painting. The main motivations for this work are that in-painting results degrade for images combining texture and structure features, and that existing methods consume considerable computation time. The working principle of image in-painting is the assumption that pixels in the known and unknown parts of an image share similar statistical and geometrical structure. In past literature, diffusion-based in-painting was introduced, which is best suited to straight lines, parabolic curves and small regions.
The main drawback of diffusion methods is that they do not work on unconnected edges and also produce gradient-reversal artifacts after restoration. With the advancement of technology, sparse-based in-painting and exemplar-based in-painting are considered to eliminate these problems. In this digest, sparse-based in-painting is introduced on the basis of the discrete wavelet transform, by finding the region pixels, calculating pixel priorities and normalizing the in-paint region boundary. An image can be mathematically represented as a function [1]
I : Ω ⊂ R^n → R^m, x ↦ I(x),
where x is a vector indicating a spatial-domain pixel; for a gray-scale image n = 2 and x = (x, y), while for a color image m = 3 and I(x) is defined in (R, G, B) color space. The goal of image in-painting is to compute the (R, G, B) components of the pixels situated at positions x in the unknown region U from the pixels situated in the known region S, to finally form the in-painted image. The purpose, in terms of quality, is that the reconstructed part seems natural to the human naked eye and is as physically plausible as possible.
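The diffusion-based baseline mentioned above can be sketched as a simple neighbour-averaging iteration over the unknown region U, using the known region S as boundary data. This is an illustrative toy, not the sparse wavelet-based method the digest proposes, and it exhibits exactly the weaknesses described (smoothing across edges, no texture synthesis):

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=200):
    """Fill the unknown region U (mask == True) from the known region S by
    repeatedly averaging each unknown pixel's 4-neighbourhood -- a minimal
    diffusion-style in-painting sketch with no structure/texture handling."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()          # crude initialisation from S
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[mask] = avg[mask]              # update only pixels in U
    return img

known = np.full((7, 7), 5.0)               # constant gray-scale image
hole = np.zeros((7, 7), bool)
hole[3, 3] = True                          # one unknown pixel
filled = diffusion_inpaint(known, hole)
```

On a constant image the hole is recovered exactly; on textured regions this scheme blurs, which is why exemplar- and sparsity-based methods are preferred there.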
A Specification For A Next Generation Cad Toolkit For Electronics Product Design
Electronic engineering product design is a complex process which has enjoyed an
increasing provision of computer-based tools since the early 1980s. Over this period
computer aided design tool development has progressed at such a pace that new features
and functions have tended to be market driven. As such, CAD tools have not been developed
through the recommended practice of defining a functional specification prior to any
software code generation.
This thesis defines a new functional specification for next generation CAD tools to support
the electronics product design process. It is synthesized from a review of the use of
computers in the electronics product design process, from a case study of Best Practices
prevalent in a wide range of electronics companies and from a new model of the design
process. The model and the best practices have given rise to a new concept for company
engineering documentation, the Product Book, which provides a logical framework for
constraining CAD tools and their users (designers) as a means of controlling costs in the
design process.
This specification differs from current perceptions of computer functionality in the CAD
tool industry by addressing human needs together with company needs of computer
supported design, rather than just providing more technological support for the designer in
isolation.
An Evaluation of Performance Enhancements to Particle Swarm Optimisation on Real-World Data
Swarm Computation is a relatively new optimisation paradigm. The basic premise is to model the collective behaviour of self-organised natural phenomena such as swarms, flocks and shoals, in order to solve optimisation problems. Particle Swarm Optimisation (PSO) is a type of swarm computation inspired by bird flocks or swarms of bees by modelling their collective social influence as they search for optimal solutions.
In many real-world applications of PSO, the algorithm is used as a data pre-processor for a neural network or similar post processing system, and is often extensively modified to suit the application. The thesis introduces techniques that allow unmodified PSO to be applied successfully to a range of problems, specifically three extensions to the basic PSO algorithm: solving optimisation problems by training a hyperspatial matrix, using a hierarchy of swarms to coordinate optimisation on several data sets simultaneously, and dynamic neighbourhood selection in swarms.
Rather than working directly with candidate solutions to an optimisation problem, the PSO algorithm is adapted to train a matrix of weights, to produce a solution to the problem from the inputs. The search space is abstracted from the problem data.
A single PSO swarm optimises a single data set and has difficulties where the data set comprises disjoint parts (such as time-series data for different days). To address this problem, we introduce a hierarchy of swarms, where each child swarm optimises one section of the data set and its gbest particle is a member of the swarm above it in the hierarchy. The parent swarm(s) coordinate their children and encourage more exploration of the solution space. We show that hierarchical swarms of this type perform better than single-swarm PSO optimisers on the disjoint data sets used.
PSO relies on interaction between particles within a neighbourhood to find good solutions. In many PSO variants, the possible interactions are arbitrary and fixed on initialisation. Our third contribution is dynamic neighbourhood selection: particles can modify their neighbourhood based on the success of the candidate neighbour particle. As PSO is intended to reflect the social interaction of agents, this change significantly increases the ability of the swarm to find optimal solutions. Applied to real-world medical and cosmological data, this modification shows improvements over standard PSO approaches with fixed neighbourhoods.
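For reference, the unmodified baseline that these three extensions build on can be sketched as a standard gbest PSO minimising the sphere function. The hyperspatial matrix, swarm hierarchy and dynamic neighbourhoods are all omitted here, and the inertia and acceleration coefficients are conventional textbook values, not those used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_sphere(dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal gbest PSO on the sphere function f(x) = sum(x**2).
    All particles share one global neighbourhood."""
    f = lambda x: np.sum(x * x, axis=-1)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), f(pos)          # personal bests
    gbest = pbest[np.argmin(pbest_val)].copy()     # global best
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = f(pos)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso_sphere()
```

A dynamic-neighbourhood variant would replace the single shared `gbest` with a per-particle best drawn from a neighbour set that is rewired during the run based on neighbour success.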
The Americanization of South Africa
African Studies Seminar series. Paper presented 19 October, 1998.
The title of this paper comes from a 1901 book by W.T. Stead, entitled The Americanisation
of the World. A British reformer and editor of the London-based Review of Reviews, Stead is
perhaps best known to historians as the author of If Christ Came to Chicago, one of the era's most
celebrated exposes of urban vice. Fewer may realize that Stead spent several years in the 1890s
in South Africa, where he was a close confidante of Cecil John Rhodes. Exposure to South Africa
played a germinal role in The Americanisation of the World, in which Stead argued that the
United States was destined to displace Great Britain as the world's pre-eminent political,
economic and cultural power. In contrast to contemporaries such as F. A. McKenzie, whose 1902
book, The American Invaders, urged action against the "armies of American entrepreneurs
conquering British markets," Stead saw the United States' global expansion as irresistible. The
choice for Britain's rulers was whether to defy the inevitable and thereby to consign themselves
to global irrelevance, or to accept the majority of their one-time colony, forging an Anglo-
American commonwealth that would secure for all time the primacy of the virile Anglo-Saxon
race.
Towards outlier detection for high-dimensional data streams using projected outlier analysis strategy
Outlier detection is an important research problem in data mining that aims to discover useful abnormal and irregular patterns hidden in large data sets. Most existing outlier detection methods only deal with static data with relatively low dimensionality.
Recently, outlier detection for high-dimensional stream data has emerged as a new research problem. A key observation that motivates this research is that outliers
in high-dimensional data are projected outliers, i.e., they are embedded in lower-dimensional subspaces. Detecting projected outliers from high-dimensional stream
data is a very challenging task for several reasons. First, detecting projected outliers is difficult even for high-dimensional static data: the exhaustive search for the outlying subspaces in which projected outliers are embedded is an NP-hard problem. Second, algorithms for handling data streams are constrained to a single pass over the streaming data, under conditions of space limitation and time criticality. The currently existing methods for outlier detection are found to be ineffective for detecting projected outliers in high-dimensional data streams.
In this thesis, we present a new technique, called the Stream Projected Outlier deTector (SPOT), which attempts to detect projected outliers in high-dimensional
data streams. SPOT employs an innovative window-based time model in capturing dynamic statistics from stream data, and a novel data structure containing a set of
top sparse subspaces to detect projected outliers effectively. SPOT also employs a multi-objective genetic algorithm as an effective search method for finding the
outlying subspaces where most projected outliers are embedded. The experimental results demonstrate that SPOT is efficient and effective in detecting projected outliers
for high-dimensional data streams. The main contribution of this thesis is that it provides a backbone for tackling the challenging problem of outlier detection for high-dimensional data streams. SPOT can facilitate the discovery of useful abnormal patterns and can potentially be applied to a variety of high-demand applications, such as sensor-network data monitoring and online transaction protection.
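The core idea of flagging a stream point as a projected outlier in some low-dimensional subspace can be sketched as follows. This toy replaces SPOT's window-based time model and multi-objective genetic search with an exhaustive scan of 2-D subspaces and a z-score test; the threshold k = 3 and the helper name are illustrative choices, not SPOT's actual machinery:

```python
import numpy as np
from itertools import combinations

def projected_outliers(window, point, dims=2, k=3.0):
    """Return the dims-dimensional subspaces in which `point` lies more than
    k standard deviations (on every axis) from the mean of the current
    sliding window -- a toy stand-in for SPOT's sparse-subspace search."""
    hits = []
    for sub in combinations(range(window.shape[1]), dims):
        w = window[:, sub]
        mu, sd = w.mean(axis=0), w.std(axis=0) + 1e-12
        z = np.abs((point[list(sub)] - mu) / sd)
        if (z > k).all():                  # outlying on every axis of sub
            hits.append(sub)
    return hits

rng = np.random.default_rng(1)
window = rng.normal(size=(200, 3))         # current window of stream points
point = np.array([8.0, 8.0, 0.0])          # deviant only in dimensions 0, 1
subspaces = projected_outliers(window, point)
```

The exhaustive scan is exponential in the number of dimensions, which is exactly why SPOT resorts to maintaining a small set of top sparse subspaces found by a genetic search instead.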
Cooperative control of autonomous connected vehicles from a Networked Control perspective: Theory and experimental validation
Formation control of autonomous connected vehicles is one of the typical problems addressed in the general context of networked control systems. By leveraging this paradigm, a platoon composed of multiple connected and automated vehicles is represented as a one-dimensional network of dynamical agents, in which each agent uses only its neighbours' information to locally control its motion, while aiming to achieve a certain global coordination with all other agents. Within this theoretical framework, control algorithms are traditionally designed under an implicit assumption of unlimited bandwidth and perfect communication environments. In practice, however, the wireless communication networks enabling cooperative driving applications introduce unavoidable impairments, such as transmission delays and packet losses, that strongly affect the performance of cooperative driving. In addition, wireless communication networks can suffer from different security threats. The challenge in the control field is hence to design cooperative control algorithms that are robust to communication impairments and resilient to cyber attacks. The aim of this work is to tackle and solve these challenges by proposing several properly designed control strategies, validated analytically, numerically and experimentally. The obtained results confirm the effectiveness of the strategies in coping with communication impairments and security vulnerabilities.
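The basic setting can be sketched as a predecessor-following platoon in which each vehicle acts on delayed neighbour information. This is a toy double-integrator model with hypothetical gains, not one of the control strategies validated in the work; it merely illustrates how a constant transmission delay enters the local control law:

```python
import numpy as np

def simulate_platoon(n=4, steps=800, dt=0.05, kp=1.0, kd=1.5,
                     gap=10.0, delay_steps=4):
    """Predecessor-following platoon of double-integrator vehicles where each
    follower acts on its predecessor's state delayed by `delay_steps` samples
    (a toy model of network transmission delay; all gains are illustrative)."""
    pos = np.arange(n)[::-1] * gap * 1.5       # start with wrong spacing
    vel = np.zeros(n)
    hist = [(pos.copy(), vel.copy())] * (delay_steps + 1)
    for _ in range(steps):
        dpos, dvel = hist[-delay_steps - 1]    # delayed neighbour information
        acc = np.zeros(n)
        for i in range(1, n):                  # leader (i = 0) cruises
            acc[i] = (kp * (dpos[i - 1] - pos[i] - gap)
                      + kd * (dvel[i - 1] - vel[i]))
        vel = vel + acc * dt                   # explicit Euler integration
        pos = pos + vel * dt
        hist.append((pos.copy(), vel.copy()))
    return pos

final = simulate_platoon()
spacing = final[:-1] - final[1:]               # inter-vehicle gaps
```

With small delays and well-damped gains the platoon settles to the desired spacing; increasing `delay_steps` degrades and eventually destabilises the formation, which is the effect the robust strategies in this work are designed to counteract.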
The Role of the Syllable Contact Law-Semisyllable (SCL-SEMI) in the Coda Clusters of Najdi Arabic and Other Languages
Final consonants in Arabic are semisyllables; that is, moraic unsyllabified segments that are attached to the prosodic word (Kiparsky, 2003). If this is the case, optional vowel epenthesis in Najdi Arabic final clusters cannot be attributed to violations of the Sonority Sequencing Principle, because sonority restrictions apply only within syllables. From a new perspective, this dissertation argues that the existence of vowel epenthesis in Najdi coda clusters with rising sonority, and its absence in clusters with falling sonority, are instead due to violations of the Syllable Contact Law (SCL), under which sonority must drop between a syllable coda and the following onset. It specifically argues that SCL divides into two sub-constraints, applying not only across two syllables (SCL-SYLL) but also across a syllable and the following unsyllabified semisyllable (SCL-SEMI). The new constraint SCL-SEMI is shown to be operative in other languages and dialects of Arabic as well, including German, Slovak, English and Jordanian Arabic. The optionality of vowel epenthesis when words are produced in isolation vs. followed by a vowel-initial suffix is accounted for by adopting the Reversible Ranking Strategy introduced by Lee (2001), whereby the two constraints DEP-IO and SCL-SEMI are reversed within the following ranking: *CCC, MAX-IO, ONSET >> ALIGNR >> DEP-IO, SCL-SEMI >> SCL-SYLL, *CXCOD. In addition, a psycholinguistic study tests the perception and production of ten Najdi speakers, observing whether they epenthesize a vowel into nonsense words with final rising-sonority clusters. It also investigates the generalizability of the semisyllable constituent by asking whether Najdi listeners assign semisyllable status to any unsyllabifiable consonant, even those occurring in nonsense words.
Results show that most participants apply their preferred vowel epenthesis pattern to nonsense words, which reflects their implicit knowledge of this pattern. Results also show a harmony effect, where inserted vowels copy stem vowels.
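The ranked-constraint interaction described above can be illustrated with a toy Optimality-Theoretic evaluator. The candidate forms and violation counts below are hypothetical, not Najdi data; the point is only the mechanics of ranked evaluation and of the ranking reversal Lee's strategy relies on:

```python
def ot_winner(candidates, ranking, violations):
    """Pick the Optimality-Theoretic winner: filter candidates constraint by
    constraint in ranking order; fewest violations on the highest-ranked
    distinguishing constraint wins."""
    best = list(candidates)
    for c in ranking:
        fewest = min(violations[cand][c] for cand in best)
        best = [cand for cand in best if violations[cand][c] == fewest]
        if len(best) == 1:                 # decided by this constraint
            break
    return best[0]

# Hypothetical tableau for a made-up final rising-sonority cluster /katl/:
ranking = ["SCL-SEMI", "DEP-IO"]
violations = {
    "katl":  {"SCL-SEMI": 1, "DEP-IO": 0},   # faithful: bad sonority contact
    "katil": {"SCL-SEMI": 0, "DEP-IO": 1},   # epenthesis repairs the cluster
}
winner = ot_winner(["katl", "katil"], ranking, violations)
```

With SCL-SEMI ranked above DEP-IO the epenthetic candidate wins; reversing the two constraints in `ranking` makes the faithful candidate win instead, which is the mechanism the Reversible Ranking Strategy exploits to derive optional epenthesis.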