10 research outputs found

    Swarm Satellites: Design, Characteristics and Applications

    No full text
    Satellite swarms are a novelty, yet promise to deliver unprecedented robustness and data-collection efficiency. They are so new, in fact, that even the definition of what a satellite swarm is remains disputable, and consequently the term "swarm" is used for practically any type of distributed space architecture. This thesis proposes the definition of a satellite swarm as "a space system consisting of many egalitarian spacecraft, cooperating to achieve a common global goal". Methods for designing such swarms are proposed and analysed, as is the purported robustness and reliability commonly associated with swarms. The investigations show that, as with many systems, it is possible to create a swarm that is less reliable than even a single satellite, yet it is also possible to create one that is more reliable. However, this requires a paradigm shift: to achieve this goal, a swarm's satellites should be built as simply as possible, which implies omitting internally redundant systems. The OLFAR (Orbiting Low Frequency Antennas for Radio astronomy) mission, studying astronomical phenomena at low frequencies, has been used as a test case throughout the thesis, and various technological hurdles on the way to the OLFAR mission are investigated and solved. This shows that while the OLFAR swarm itself is still slightly beyond current-day technologies, it is not as far out as originally thought, and it could well serve as a prime example of a mission for which a satellite swarm would not only be beneficial, but almost imperative.
    Space Systems Engineering

    On the reliability of spacecraft swarms

    No full text
    Satellite swarms, consisting of a large number of identical, miniaturized and simple satellites, are claimed to provide an implementation for specific space missions which require high reliability. However, a consistent model of how reliability and availability at mission level are linked to cost- and time-effective design of the individual swarm satellites has not yet been established. We have established a method to model how the technology and processes applied in designing swarm satellites under cost and time constraints impact the system-level performance of swarms. The method is applied and discussed for a future astronomy mission using a satellite swarm. Swarm satellites are severely constrained by mass, as they have to be produced in large numbers. This generally implies that they feature drastically reduced internal redundancy. This is only acceptable when all satellites are functionally identical and can hence take over certain tasks of a malfunctioning satellite, resulting in a graceful degradation of the system performance. This swarm feature makes it significantly flexible and robust, yet it potentially affects its reliability and system throughput. Therefore, in this paper we investigate and show how the reliability of an individual satellite transfers into the overall system reliability, and hence the associated throughput of the swarm as a whole. We generated a generic model of a simple swarm satellite, and used it in conjunction with a Markov-chain-based reliability analysis to assess the impact of a failure of each of the sub-systems on the functionality of the individual satellite, as well as its impact on the functionality of the swarm. Further analysis was done using Monte-Carlo simulations. The research focussed mainly on the (useful) lifetime of the system as a whole, considering both full and partial failures of elements. This was done under the assumption that swarm elements could still function in a reduced operational state when non-critical components failed. The effect of recoverable malfunctions is also investigated.
    Space Engineering · Aerospace Engineering
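The Monte-Carlo side of such an analysis can be illustrated with a minimal sketch. All numbers below (satellite count, failure rate, throughput threshold) are hypothetical placeholders, not values from the paper; the swarm is simply treated as operational while at least `k_min` of its independent members survive.

```python
import random

def swarm_mc_reliability(n_sats=50, p_fail_year=0.2, k_min=10,
                         years=5, trials=20_000, seed=42):
    """Estimate the probability that at least k_min of n_sats satellites
    survive a given number of years, assuming independent per-satellite
    failures with a constant yearly failure probability (illustrative only)."""
    rng = random.Random(seed)
    p_survive = (1.0 - p_fail_year) ** years   # per-satellite survival probability
    hits = 0
    for _ in range(trials):
        # Count surviving satellites in this trial
        alive = sum(rng.random() < p_survive for _ in range(n_sats))
        if alive >= k_min:  # swarm still meets its throughput threshold
            hits += 1
    return hits / trials
```

Sweeping `k_min` against the per-satellite failure rate reproduces the qualitative trade-off the paper describes: a swarm can come out either less or more reliable than a single satellite, depending on how much graceful degradation the mission tolerates.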

    Unveiling complexity of hydrogen integration: A multi-faceted exploration of challenges in the Dutch context

    No full text
    As the transition to sustainable energy intensifies, hydrogen emerges as a pivotal medium in mitigating climate change and improving energy security. While its applicability across various sectors is undeniable, its integration into established energy systems presents multifaceted challenges. This study investigates the complexities of integrating hydrogen into the Netherlands' energy systems. Beyond technological advancements, the successful design and rollout of a hydrogen supply chain require coordination and collaboration among a myriad of stakeholders. Through a mixed-methods approach, this study combines findings from a broad literature review, policy document analyses, evaluation of 59 field projects, and engaging dialogues with 33 key stakeholders from different sectors. This investigation led to the identification and categorization of key players in the Dutch hydrogen sector, revealing their interconnected roles and the challenges encountered in the hydrogen integration process. The study further categorized the identified challenges faced by stakeholders into five core domains: technical, infrastructural (including supply chain), socioeconomic, environmental, and institutional, with associated factors. Prominent challenges include transportation infrastructure upgrades, high initial costs and scalability, effective storage methods, safety and cybersecurity measures, storage and distribution infrastructure, security of supply, and public acceptance. This study contributes to the hydrogen integration discourse, offering insights for academics, industry, and policymakers. Its detailed stakeholder analysis, holistic categorization of challenges across five domains, and a stakeholder-centric approach grounded in real-world dialogues offer applicable frameworks beyond its primary context. 
In this vein, it guides future research and decisions, and its approach is adaptable for different regions or sectors, emphasizing comprehensive transition strategies.
    Design for Sustainability · Methodologie en Organisatie van Design

    Orbit design of a swarm for ultra-long wavelength radio interferometry with preliminary swarm and thruster sizing

    No full text
    Observing the universe in the Ultra-Long Wavelength (ULW) regime has been called the ‘last frontier in astronomy’: real imaging capabilities here are yet to be achieved. Obtaining an image of the sky in this frequency band can be done by employing a swarm of satellites that together act as an interferometer and collect the required imaging information pieces throughout the course of their operational life. Meeting the mission objective is challenging for such a swarm, since this imposes restrictions on the operational environment and the relative position and velocity vectors between the swarm elements. This work proposes an orbit solution in a Heliocentric Earth-Leading Orbit (HELO) for an autonomous CubeSat swarm with chemical thrusters. A distributed formation flying algorithm is used to aid the collection of the required imaging information pieces. Furthermore, the estimated total mission launch mass is reduced by optimising cost functions and finding favourable position and velocity at the start of operational life, as well as by finding favourable thrust manoeuvre patterns. The results show that the mission objective, obtaining a 3D map of the Universe in the ULW regime, can be achieved with 68 6U spacecraft (S/C). Moreover, the swarm can remain in a Radio Frequency Interference (RFI) quiet zone of >5 × 10⁶ km, whilst not drifting further than ~6.6 × 10⁶ km from Earth for an operational life of one year.
    Space Systems Engineering · Astrodynamics & Space Missions

    Fault tolerant wind turbine production operation and shutdown (Sustainable Control)

    No full text
    Extreme environmental conditions as well as system failures are real-life phenomena. Especially offshore, extreme environmental conditions and system faults have to be dealt with in an effective way. The project Sustainable Control, a new approach to operate wind turbines (Agentschap NL, grant EOSLT02013), provides the concepts for an integrated control platform. This platform accomplishes fault-tolerant control in regular and extreme conditions during production operation and shutdown. The platform is built up from methods for the detection of extreme conditions and faults and from methods for operation and shutdown. The detection methods are largely model-based, which implies that event detection is derived from anomalous behaviour of the outputs of an observer, which can be a Kalman filter. Various types of control approaches are included in the control methods. Often, several scalar feedback loops work together, the validity of which is motivated through frequency separation or orthogonality. The detection and handling of extreme conditions and sensor failures prolongs operation. The application of optimization techniques during production operation and during shutdown can reduce the loads on the turbine significantly. A proof of principle on a multi-MW wind turbine for optimized production operation showed a typical reduction of fatigue damage equivalent loads between 10% and 30%.
    Delft Center for Systems and Control · Mechanical, Maritime and Materials Engineering
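The model-based detection idea, flagging anomalous observer residuals, can be sketched in a toy scalar form. The gain, threshold, noise level and injected sensor bias below are illustrative assumptions, not values from the project:

```python
import random

def detect_fault(measurements, gain=0.2, threshold=3.0, noise_std=0.5):
    """Toy residual-based fault detector: a steady-state scalar observer
    tracks the measurement; a residual exceeding `threshold` noise
    standard deviations flags an anomaly (Kalman-like, illustrative only)."""
    est = measurements[0]
    flags = []
    for z in measurements:
        residual = z - est                  # innovation: measurement minus prediction
        flags.append(abs(residual) > threshold * noise_std)
        est += gain * residual              # observer correction step
    return flags

# Simulated sensor: true value 10.0 plus noise, with a +4.0 bias fault from sample 60
rng = random.Random(1)
data = [10.0 + rng.gauss(0, 0.5) + (4.0 if i >= 60 else 0.0) for i in range(100)]
flags = detect_fault(data)
```

The abrupt bias produces a residual far outside the noise band at the fault onset; as the observer re-converges to the biased signal, the flag clears again, which is why practical schemes also monitor persistence of the residual.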

    Distributed memory parallel computing of three-dimensional variable-density groundwater flow and salt transport

    Get PDF
    Fresh groundwater reserves, of vital importance for more than a billion people living in coastal zones, are threatened by saltwater intrusion due to anthropogenic activities and climate change. High-resolution three-dimensional (3D), variable-density (VD), groundwater flow and salt transport (FT) numerical models are increasingly being used to support water managers and decision makers in their strategic planning and measures for dealing with the problem of fresh water shortages. However, these computer models typically require long runtimes and large memory usage, making them impractical to use without parallelization. Here, we parallelize SEAWAT, and show that with our parallelization 3D-VD-FT modeling is now feasible for a wide range of hydrogeologists, since a) speedups of more than two orders of magnitude can be obtained, as illustrated in this paper, and b) large 3D-VD-FT models whose memory requirements far exceed single-machine memory become feasible.
    Mathematical Physics
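Distributed-memory parallelization of grid-based solvers like this typically rests on domain decomposition with ghost-cell (halo) exchange between subdomains. The sketch below mimics that pattern serially on a 1D explicit diffusion step; it is a stand-in illustration of the technique, not SEAWAT's actual solver, and in a real code each partition would be an MPI rank exchanging halos via messages:

```python
def diffuse_partitioned(u, n_parts=4, steps=10, alpha=0.25):
    """Split a 1D grid into subdomains; each step, neighbouring subdomains
    exchange one ghost cell, then apply a local explicit diffusion update.
    Zero-flux (mirrored) ghosts are used at the global boundaries."""
    n = len(u)
    bounds = [(i * n // n_parts, (i + 1) * n // n_parts) for i in range(n_parts)]
    parts = [u[a:b] for a, b in bounds]
    for _ in range(steps):
        # "halo exchange": copy edge values from neighbours (message passing in MPI)
        left_ghost = [parts[i - 1][-1] if i > 0 else parts[i][0]
                      for i in range(n_parts)]
        right_ghost = [parts[i + 1][0] if i < n_parts - 1 else parts[i][-1]
                       for i in range(n_parts)]
        new_parts = []
        for i, p in enumerate(parts):
            ext = [left_ghost[i]] + p + [right_ghost[i]]   # pad with ghosts
            new_parts.append([ext[j] + alpha * (ext[j - 1] - 2 * ext[j] + ext[j + 1])
                              for j in range(1, len(ext) - 1)])
        parts = new_parts
    return [x for p in parts for x in p]
```

Because the update is fully explicit, the partitioned result is identical to the unpartitioned one; the per-rank work shrinks with the subdomain size, which is the source of the reported speedups (communication overhead aside).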

    Match filtering approach for signal acquisition in radio-pulsar navigation

    No full text
    Pulsars, with their periodic pulses and known positions, are ideal beacons for navigation. The challenge, however, is the detection of the very weak pulsar signals that are submerged in noise. Radio-based approaches allow the use of advanced techniques and methods for the detection and acquisition of such weak signals. In this paper, an effective signal acquisition method based on epoch folding and matched filtering is proposed that can enable pulsar navigation on spacecraft. Traditionally, astronomers use an epoch folding algorithm to search for new pulsars, which is a very time- and processing-power-consuming approach. Since a pulsar navigation system uses signals from known pulsars, advanced algorithms can reduce the time and processing power required for pulsar detection. Applying optimization methods to folding algorithms could lead to an increase in detection speed; however, it is not practical when taking all known signal parameters into account. In this paper a new approach is proposed to reduce the time and processing power further, using a-priori knowledge such as the pulse shape. This approach is based on the concept of matched filtering. Matched filtering is the basic tool for extracting known wavelets from a signal that has been contaminated by noise. A matched filter is obtained by correlating the observation with a template of a known signal, to detect its presence. Such a matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise. After a description of the underlying theory, simulations show that by using this method, significant increases in detection speed are possible.
    Intelligent Systems · Electrical Engineering, Mathematics and Computer Science
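The two building blocks named in the abstract, epoch folding and matched filtering, can be sketched on synthetic data. The pulse shape, period, amplitude and noise level below are invented for illustration and are not drawn from the paper:

```python
import math
import random

def fold(signal, period):
    """Epoch folding: average the signal modulo the known pulse period,
    boosting SNR by roughly sqrt(number of folded periods)."""
    prof = [0.0] * period
    counts = [0] * period
    for i, x in enumerate(signal):
        prof[i % period] += x
        counts[i % period] += 1
    return [p / c for p, c in zip(prof, counts)]

def matched_filter_peak(profile, template):
    """Circularly correlate the folded profile with the known pulse template
    and return the best-matching phase shift and its correlation score."""
    n = len(profile)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(profile[(s + j) % n] * template[j] for j in range(n))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

# Synthetic pulsar: weak Gaussian pulse at phase 12 of a 32-bin period, buried in noise
period, n_periods = 32, 200
template = [math.exp(-((j - 12) ** 2) / 8.0) for j in range(period)]
rng = random.Random(0)
signal = [0.3 * template[i % period] + rng.gauss(0, 1.0)
          for i in range(period * n_periods)]
profile = fold(signal, period)
shift, score = matched_filter_peak(profile, template)
```

Folding first and then correlating a short profile against the template is far cheaper than correlating the full raw time series, which is the efficiency argument the paper builds on.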

    Atherosclerotic Plaque Component Segmentation in Combined Carotid MRI and CTA Data Incorporating Class Label Uncertainty

    Get PDF
    Atherosclerotic plaque composition can indicate plaque vulnerability. We segment atherosclerotic plaque components from the carotid artery on a combination of in vivo MRI and CT-angiography (CTA) data using supervised voxelwise classification. In contrast to previous studies, the ground truth for training is directly obtained from 3D registration with histology for fibrous and lipid-rich necrotic tissue, and with CT for calcification. This registration does, however, not provide accurate voxelwise correspondence. We therefore evaluate three approaches that incorporate uncertainty in the ground truth used for training: I) soft labels are created by Gaussian blurring of the original binary histology segmentations to reduce weights at the boundaries between components, and are weighted by the estimated registration accuracy of the histology and in vivo imaging data (measured by overlap), II) samples are weighted by the local contour distance of the lumen and outer wall between histology and in vivo data, and III) 10% of each class is rejected by Gaussian outlier rejection. Classification was evaluated on the relative volumes (% of tissue type in the vessel wall) for calcified, fibrous and lipid-rich necrotic tissue, using linear discriminant (LDC) and support vector machine (SVM) classification. In addition, the combination of MRI and CTA data was compared to using only one imaging modality. Best results were obtained by LDC and outlier rejection: the volume error per vessel was 0.9 ± 1.0% for calcification, 12.7 ± 7.6% for fibrous and 12.1 ± 8.1% for necrotic tissue, with Spearman rank correlation coefficients of 0.91 (calcification), 0.80 (fibrous) and 0.81 (necrotic). While segmentation using only MRI features yielded low accuracy for calcification, and segmentation using only CTA features yielded low accuracy for necrotic tissue, the combination of features from MRI and CTA gave good results for all studied components.
    ImPhys/Imaging Physics · Applied Sciences

    A roadmap towards a space-based radio telescope for ultra-low frequency radio astronomy

    No full text
    The past two decades have witnessed a renewed interest in low frequency radio astronomy, with a particular focus on frequencies above 30 MHz, e.g., LOFAR (LOw Frequency ARray) in the Netherlands and its European extension, the International LOFAR Telescope (ILT). However, at frequencies below 30 MHz, Earth-based observations are limited due to a combination of severe ionospheric distortions, almost full reflection of radio waves below 10 MHz, solar eruptions and the radio frequency interference (RFI) of human-made signals. Moreover, there are interesting scientific processes which naturally occur at these low frequencies. A space- or Lunar-based ultra-low-frequency (also referred to as ultra-long-wavelength, ULW) radio array would suffer significantly less from these limitations and hence would open up the last, virtually unexplored frequency domain in the electromagnetic spectrum. A roadmap has been initiated by astronomers and researchers in the Netherlands to explore the opportunity of building a swarm of satellites to observe in the frequency band below 30 MHz. This roadmap, dubbed Orbiting Low Frequency Antennas for Radio Astronomy (OLFAR), envisions a space-based ultra-low frequency radio telescope that will explore the Universe's so-called dark ages, map the interstellar medium, and study planetary and solar bursts in the solar system and search for them in other planetary systems. Such a radio astronomy system will comprise a swarm of hundreds to thousands of satellites, working together as a single aperture synthesis instrument deployed sufficiently far away from Earth to avoid terrestrial RFI. The OLFAR telescope is a novel and complex system, requiring yet-to-be-proven engineering solutions. Therefore, a number of key technologies still need to be developed and proven. The first step in this roadmap is the NCLE (Netherlands China Low Frequency Explorer) experiment, which was launched in May 2018 on the Chinese Chang'e 4 mission.
The NCLE payload consists of a three-monopole antenna system for low frequency observations, from which the first data stream is expected in the second half of 2019, which will provide important feedback for future science and technology opportunities. In this paper, the roadmap towards OLFAR, a brief overview of the science opportunities, and the technological and programmatic challenges of the mission are presented.
    Circuits and Systems · Electronics · Space Engineering · Astrodynamics & Space Missions

    Measurement of the diffractive cross-section in deep inelastic scattering

    Get PDF
    Diffractive scattering of γ*p → X + N, where N is either a proton or a nucleonic system with M_N < 4 GeV, has been measured in deep inelastic scattering (DIS) at HERA. The cross section was determined by a novel method as a function of the γ*p c.m. energy W between 60 and 245 GeV and of the mass M_X of the system X up to 15 GeV at average Q² values of 14 and 31 GeV². The diffractive cross section dσ^diff/dM_X is, within errors, found to rise linearly with W. Parameterizing the W dependence by the form dσ^diff/dM_X ∝ (W²)^(2ᾱ_IP − 2), the DIS data yield for the pomeron trajectory ᾱ_IP = 1.23 ± 0.02 (stat) ± 0.04 (syst), averaged over t in the measured kinematic range, assuming the longitudinal photon contribution to be zero. This value for the pomeron trajectory is substantially larger than ᾱ_IP extracted from soft interactions. The value of ᾱ_IP measured in this analysis suggests that a substantial part of the diffractive DIS cross section originates from processes which can be described by perturbative QCD. From the measured diffractive cross sections the diffractive structure function of the proton F₂^D(3)(β, Q², x_IP) has been determined, where β is the momentum fraction of the struck quark in the pomeron. The form F₂^D(3) = constant · (1/x_IP)^a gives a good fit to the data in all β and Q² intervals with a = 1.46 ± 0.04 (stat) ±
    Comment: 45 pages, including 16 figures