Modes of the Dark Ages 21 cm field accessible to a lunar radio interferometer
Abstract
At redshifts beyond $z \approx 30$, the 21 cm line from neutral hydrogen is expected to be essentially the only viable probe of the three-dimensional matter distribution. The lunar far side is an extremely appealing site for future radio arrays that target this signal, as it is protected from terrestrial radio frequency interference, and has no ionosphere to attenuate and absorb radio emission at low frequencies (tens of MHz and below). We forecast the sensitivity of low-frequency lunar radio arrays to the bispectrum of the 21 cm brightness temperature field, which can in turn be used to probe primordial non-Gaussianity generated by particular early universe models. We account for the loss of particular regions of Fourier space due to instrumental limitations and systematic effects, and predict the sensitivity of different representative array designs to local-type non-Gaussianity in the bispectrum, parametrized by $f_{\rm NL}$. Under the most optimistic assumption of sample variance-limited observations, we find that constraints of $\sigma(f_{\rm NL}) \lesssim 0.05$ could be achieved for several broad redshift bins if foregrounds can be removed effectively. These values degrade to between approximately 0.05 and 0.7 from the lowest to the highest redshift bins, respectively, when a large foreground wedge region is excluded.
This article is part of a discussion meeting issue ‘Astronomy from the Moon: the next decades (part 2)’.
1. Introduction
The cosmic Dark Ages, roughly corresponding to the redshift range $30 \lesssim z \lesssim 1000$ [1,2], constitute one of the last great frontiers of observational astronomy. This period is sandwiched between the recombination era, when the baryonic content of the Universe became electrically neutral for the first time ($z \approx 1100$), and the Cosmic Dawn ($z \lesssim 30$), when the first stars and galaxies formed and began to reionize the neutral intergalactic medium (IGM).
The large lookback time and the lack of luminous sources from this period make any form of direct observation of the Dark Ages challenging. The baryonic matter at this time is predominantly neutral hydrogen (HI), however, which has a distinctive spectral line deep in the radio part of the spectrum, at a wavelength of 21.1 cm (1420.4 MHz) [3]. By the time of observation today, this has been strongly redshifted into the low-frequency end of the radio spectrum, at tens of MHz and below. This is quite fortunate, as the late Universe is optically thin at these frequencies. Radio emission from the Dark Ages can travel, effectively unimpeded, from its time of emission until the present day, making it a highly promising probe of this otherwise ‘dark’ epoch [3].
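As a quick illustration of this mapping (our own sketch, not part of the original analysis), the observed frequency of the redshifted 21 cm line is simply $\nu_{\rm obs} = 1420.4\,{\rm MHz}/(1+z)$:

```python
# Sketch: observed frequency of the redshifted 21 cm line.
# nu_obs = nu_21 / (1 + z), with nu_21 = 1420.4 MHz.
NU_21_MHZ = 1420.4

def observed_frequency_mhz(z):
    """Observed frequency (MHz) of the 21 cm line emitted at redshift z."""
    return NU_21_MHZ / (1.0 + z)

for z in (30, 50, 100, 200):
    print(f"z = {z:5.1f} -> nu_obs = {observed_frequency_mhz(z):6.2f} MHz")
# Dark Ages redshifts (z ~ 30-200) land at ~7-46 MHz, i.e. tens of MHz and below.
```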
Immediately following recombination at $z \approx 1100$, fluctuations in the baryonic matter density, including the neutral hydrogen gas, had a different amplitude and scale dependence compared with the dominant cold dark matter (CDM) component [4]. Over time, the baryon distribution evolved to match the CDM distribution on sub-horizon scales, down to around the Jeans scale, where baryon pressure effects take over [4,5]. By measuring the statistical clustering properties of the baryons, we can therefore infer how the dark matter is clustered, which in turn can be related back to the cosmic initial conditions set during the inflationary epoch.
The three-dimensional spatial distribution of the neutral hydrogen is not observed directly, however. The actual observable quantity is the intensity, or equivalently the brightness temperature, $T_b$, of the HI as a function of angle on the sky and radio frequency. The frequency can be mapped to an observed redshift, which allows us to reconstruct a three-dimensional map of the HI as it is projected onto our past lightcone, i.e. the frequency dimension captures the evolution of the brightness temperature field in both time and comoving distance from us. The peculiar motions of the gas also contribute a Doppler shift, further distorting the distribution in the frequency direction [3]. These effects are expected to be relatively mild, and predictable.
Dark matter clustering can also be studied using large-scale structure probes at later times. The tracer populations used there are expected to have a complex relationship with the dark matter distribution in which they are embedded, however—the connection between galaxies and the dark matter field is difficult to model, even in light of modern hydrodynamical simulations, and so contributes substantial theoretical uncertainty to the interpretation of the observed clustering [6]. This is particularly the case on smaller scales, of order a few Mpc and below, where dark matter has assembled into collapsed halo objects that are populated by different types of galaxies according to complex non-linear galaxy formation and feedback processes [7]. The effects of nonlinear gravitational collapse influence increasingly large scales as time goes on, breaking the connection between the observed clustering of matter and the primordial fluctuations that initially seeded it.
The picture is somewhat simpler in the Dark Ages. Nonlinear collapse is confined to much smaller scales—large collapsed objects have not yet had time to form—and there are not yet any galaxies to participate in complicated formation and evolution processes. Nor do we need to worry about the complicated and uncertain radiative processes that affect the ionisation properties of the gas during the Cosmic Dawn and the subsequent Epoch of Reionization (EoR). The 21 cm brightness temperature is not simply a linear tracer of the baryon or CDM density, however. The local brightness temperature fluctuation depends on the thermal state of the gas (via the spin temperature, $T_{\rm s}$) as well as on the local HI density, and is also modulated by an optical depth term that depends on the line-of-sight peculiar velocity [4,5]. While these terms and their relation to the underlying baryon and CDM fluctuations, $\delta_b$ and $\delta_c$, can be calculated analytically, the mapping between them necessarily includes terms beyond linear order.
For cosmological interpretation, the quantities of interest are not the three-dimensional matter field itself, but its statistical properties, as these can be predicted theoretically, regardless of the particular statistical realization of the field that we observe. Existing observational constraints, e.g. from cosmic microwave background (CMB) experiments, point towards a cosmic matter distribution that is approximately statistically homogeneous, isotropic, and close to Gaussian-distributed [8]. The implication of these properties is that, to a very good approximation, the statistics of the matter distribution are fully described by the power spectrum of the matter density fluctuations, i.e. their 2-point function in Fourier space, defined by
$$\langle \delta(\mathbf{k})\,\delta^*(\mathbf{k}^\prime)\rangle = (2\pi)^3\,\delta_{\rm D}(\mathbf{k}-\mathbf{k}^\prime)\,P(k),$$
where $\delta_{\rm D}$ is the Dirac delta function.
A useful probe of non-Gaussianity is the bispectrum, i.e. the 3-point function of the matter density field in Fourier space, which vanishes for a perfectly Gaussian field. Different types of primordial model predict different amplitudes and shapes of the bispectrum [10]. The most commonly studied is local-type non-Gaussianity, which gives rise to a bispectrum that is largest for ‘squeezed’ 3-point configurations, i.e. where the triangle formed by the three wavevectors of the 3-point function is an elongated isosceles triangle, $k_3 \ll k_1 \approx k_2$. The amplitude of this bispectrum shape is typically parametrized by $f_{\rm NL}$, and current constraints from the CMB place an upper limit of approximately $|f_{\rm NL}| \lesssim 5$ [11]. Depending on the inflation model, one can obtain values of $f_{\rm NL} \gtrsim 1$ (for particular multi-field models for instance), down to a strong prediction of $f_{\rm NL} \sim 0.01$ for single field models [12,13]. Other bispectrum shapes can also be obtained and their amplitudes constrained [14,15].
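To make the conventions concrete, the local-type model is usually defined via the ansatz (a standard result, e.g. [10], rather than anything specific to this paper)
$$\Phi(\mathbf{x}) = \phi(\mathbf{x}) + f_{\rm NL}\left[\phi^2(\mathbf{x}) - \langle\phi^2\rangle\right],$$
where $\Phi$ is the primordial potential and $\phi$ is a Gaussian random field. The induced bispectrum is proportional to $f_{\rm NL}$ and peaks in the squeezed limit, $B_\Phi \to 4 f_{\rm NL}\, P_\phi(k_1)\, P_\phi(k_3)$ as $k_3 \to 0$, which is why squeezed triangle configurations carry most of the constraining power for this shape.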
The primary CMB temperature fluctuations have been measured well enough that no further (substantial) improvement in bispectrum constraints can be made from this source—the constraints are now dominated by ‘cosmic variance’, which is the intrinsic uncertainty due to having only a finite number of samples of a given quantity (in this case, a finite number of observable Fourier modes in the observed CMB). Late-time large-scale structure probes measure a larger number of Fourier modes than the CMB, and so can be used to further improve constraints and push closer to the level required to test some types of inflationary model, albeit with the difficulty of needing to account for nonlinearity and astrophysical modelling uncertainty [12,16–19].
Pushing to the $\sigma(f_{\rm NL}) \sim 0.01$ level requires vastly more Fourier modes, however [20]. This is where Dark Ages 21 cm mapping experiments have a crucial role to play. There are very many more modes available from extending to higher wavenumbers than are reasonably accessible to low-redshift surveys, as the number of Fourier modes in a survey volume scales like $k_{\rm max}^3$, where $k_{\rm max}$ is the maximum recoverable Fourier wavenumber. Furthermore, the non-primordial contributions to the Dark Ages 21 cm bispectrum, such as those due to the non-linear mapping between the 21 cm signal and the underlying CDM field, can be calculated analytically, with comparatively fewer theoretical uncertainties in how to model them.
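To see why the mode count matters, recall the standard estimate that a survey of comoving volume $V$ contains $N_{\rm modes} \approx V k_{\rm max}^3/(6\pi^2)$ independent Fourier modes up to $k_{\rm max}$. A rough numerical sketch, with illustrative volumes and cutoffs of our own choosing (not the paper's figures):

```python
import numpy as np

def n_modes(volume_gpc3, kmax):
    """Approximate number of Fourier modes up to kmax [Mpc^-1]
    in a volume [Gpc^3], using N ~ V * kmax^3 / (6 pi^2)."""
    volume_mpc3 = volume_gpc3 * 1e9
    return volume_mpc3 * kmax**3 / (6.0 * np.pi**2)

# Illustrative numbers only: a low-z galaxy survey vs a Dark Ages
# 21 cm survey with a much larger volume and higher usable kmax.
print(f"galaxy survey : {n_modes(50.0, 0.2):.2e} modes")
print(f"dark ages 21cm: {n_modes(500.0, 10.0):.2e} modes")
```

With these (assumed) numbers the mode count grows by roughly six orders of magnitude, which is the origin of the forecast improvement in $\sigma(f_{\rm NL})$.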
In this article, we examine some of the practical challenges associated with observing the bispectrum of 21 cm brightness temperature fluctuations from the cosmic Dark Ages. The highest redshifts (and a large fraction of the Fourier modes) can only be accessed by radio telescopes outside the Earth’s atmosphere, due to the scattering effect of the ionosphere on low-frequency radio waves, while radio frequency interference from around the Earth is also challenging to identify and remove at the relevant frequencies. Hence, there have been a number of proposals to build a Dark Ages 21 cm instrument on the far side of the Moon, or in lunar orbit.
We use the technique of Fisher forecasting to obtain simplified predictions for how well different array configurations should be able to measure the HI bispectrum. We incorporate the effect of losing modes to systematic effects such as radio foreground emission in our forecasts, as well as the intrinsic limitations due to instrumental resolution.
2. Lunar radio array configurations
Leaving matters such as deployment, power, data transmission and array calibration aside, the main properties that determine the observing characteristics of a radio interferometer array are its frequency range (bandwidth) and spectral resolution, the field of view of individual receiving elements (also called the primary beam), and the distribution of baselines, i.e. the number of available correlated antenna pairs as a function of the length and orientation of the separation vector between them. All of these terms are represented in the noise power spectrum, $P_{\rm N}$ (see equation (3.11) below). In this section, we attempt to distill the various lunar arrays that have been proposed into a handful of model configurations that we can then use to produce representative forecasts.
(a) Fourier-space sampling function
First, we briefly review the connection between the instrumental configuration, including the baseline distribution, and the set of Fourier modes on the sky that can be observed by the interferometer and thus included in the bispectrum measurements. An illustration for some representative instrumental properties is shown in figure 1.
Figure 1. ‘Exclusion’ plot showing which cylindrical Fourier modes ($k_\perp$, $k_\parallel$) are observable with an interferometer for four representative redshifts. Shaded regions would be excluded from the observations due to various observational effects: the foreground wedge (grey, diagonal); the frequency channel width (grey-blue, top); the fractional bandwidth of the observation (magenta, bottom); and the minimum and maximum baseline lengths (blue, left and yellow, right, respectively). We have assumed representative numbers here: 10 kHz frequency channels, a 30% effective bandwidth at each redshift, and minimum and maximum baselines of 10 m and 10 km, respectively.
(i) Baseline distribution
The baseline distribution can be represented as a number density of baselines in the $uv$-plane. Vectors in this plane can be mapped into transverse Fourier wavevectors via $\mathbf{k}_\perp = 2\pi\mathbf{u}/\chi(z)$ (where $\chi(z)$ is the comoving distance to redshift $z$), assuming that $\mathbf{k}_\perp$ has been defined in a plane on the sky that is parallel to the plane of the interferometer array, which we take to be perfectly flat, tangent to the lunar surface, with a normal that points at the zenith. Under these assumptions, the $uv$-plane baseline distribution can be calculated by finding the vector $\mathbf{u} = \mathbf{b}/\lambda$ corresponding to each baseline, where $\mathbf{b}$ is the separation vector between the antennae. The set of baseline vectors can then be binned appropriately in the $uv$-plane to obtain the number density $n(\mathbf{u})$. Note that the $uv$-plane distribution changes with observing wavelength since $\mathbf{u} = \mathbf{b}/\lambda$, and so will also vary with redshift.
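A minimal sketch of this binning procedure, for a hypothetical small grid of antenna positions (the layout, wavelength and bin width below are our own illustrative choices):

```python
import numpy as np

# Hypothetical antenna layout: a small regular grid (positions in metres).
spacing = 30.0
xx, yy = np.meshgrid(np.arange(8) * spacing, np.arange(8) * spacing)
positions = np.column_stack([xx.ravel(), yy.ravel()])

wavelength = 10.0  # metres (~30 MHz); u = b / lambda

# All baseline vectors b = x_i - x_j between distinct antenna pairs.
i, j = np.triu_indices(len(positions), k=1)
baselines = positions[i] - positions[j]

# Bin |u| to obtain the circularly averaged number density n(u).
u = np.linalg.norm(baselines, axis=1) / wavelength
bin_width = 1.0  # roughly the uv-plane correlation scale, ~1/theta_FOV
bins = np.arange(0.0, u.max() + bin_width, bin_width)
counts, edges = np.histogram(u, bins=bins)
annulus_area = np.pi * (edges[1:]**2 - edges[:-1]**2)
n_u = counts / annulus_area  # baselines per unit area in the uv-plane
```

Repeating the last few lines at each observing wavelength gives the redshift dependence of $n(\mathbf{u})$ noted above.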
In the flat-sky limit, each baseline can be thought of as being sensitive to the amplitude (in brightness temperature) of a single Fourier mode on the sky with wavevector $\mathbf{k}_\perp = 2\pi\mathbf{u}/\chi(z)$. The field of view of the antennae acts as a window function on the set of Fourier modes that the array is sensitive to, however, and so introduces a correlation scale in the $uv$-plane of approximately $\Delta u \sim 1/\theta_{\rm FOV}$. This scale can be used as the minimum bin width for the baseline number density distribution, and also sets a lower limit on the minimum recoverable mode (i.e. the maximum recoverable angular scale).
Importantly, only Fourier modes on the sky represented by baselines that are present in the array can be recovered. Unrepresented Fourier modes are not measured, and therefore cannot be used in the bispectrum measurements etc. In Earth-based interferometry applications, it is common to use the rotation of the baseline vectors with respect to the sky, due to the Earth’s diurnal rotation, to perform rotation synthesis. As the Earth rotates, each baseline migrates along an elliptical track in the $uv$-plane as the orientation and projection of the baseline change with respect to a reference point on the sky. This allows a wider range of Fourier modes to be recovered by each baseline as the rotation progresses.
For lunar applications, the situation is different, as there is no diurnal rotation. The Moon orbits the Earth during the lunar month, and the Earth orbits the Sun, so observations taken across a terrestrial year will allow some degree of rotation synthesis to be achieved [21]. Different patches of the sky will also rise and set throughout the month. Since the Sun is a bright radio source, however, observations would normally be taken only during the lunar night, which prevents some segments of the tracks in the $uv$-plane from being recovered.
(ii) Foreground wedge
Radio interferometer visibilities measure the integrated sky intensity distribution after it has been modulated by a baseline-dependent fringe pattern. To a good approximation, the fringe pattern for each baseline can be mapped to a transverse Fourier mode, $\mathbf{k}_\perp$, at each frequency. The sky intensity distribution is also modulated by the primary beam pattern of the antennae, however, which also depends on frequency, but in a different way. When a Fourier transform is performed in the frequency direction, i.e. to give the visibility in terms of the radial Fourier mode, $k_\parallel$, this additional modulation gives rise to a coupling between transverse and radial modes of the sky signal. This scatters intrinsically spectrally-smooth sky emission (i.e. the foregrounds) at low $k_\parallel$ into an extended wedge-shaped region in $(k_\perp, k_\parallel)$ space. The maximum extent of the wedge region is in principle related to the maximum geometric delay in arrival time of a wavefront between the two antennae of a baseline, which occurs when a source is on the horizon. This ‘worst case’ foreground-contaminated region is referred to as the ‘horizon wedge’.
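The boundary of this region is commonly written (in the standard form used by ground-based 21 cm experiments; we quote it here for orientation) as
$$k_\parallel \lesssim k_\perp\,\frac{\chi(z)\,H(z)}{c\,(1+z)}\,\sin\theta_{\rm max},$$
where $\theta_{\rm max}$ is the maximum angle from the zenith at which foreground sources contribute; the horizon wedge corresponds to $\theta_{\rm max} = 90^\circ$.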
The intensity of the contamination can vary within the wedge, depending on the shape of the primary beam pattern—for dipole antennae the wedge is quite evenly contaminated, for instance, while for parabolic reflectors, it tends to be more localized into the ‘prongs’ of a pitchfork shape, with cleaner regions in between them. Since most lunar array concepts employ dipoles for reasons of cost and simplicity, we can anticipate severe contamination throughout the entire wedge region. Cleaning foreground emission from inside the wedge region requires exceptionally good models of both the primary beams and the foreground emission however, and many ground-based arrays choose a more conservative ‘avoidance’ strategy instead. This is where 21 cm signal modes within the wedge region are assumed to be irretrievable and so are excised, while a variety of analysis choices are made to prevent power from leaking out of the wedge region into an otherwise clean ‘window’ region.
In the absence of a detailed plan to support extraction of 21 cm signal modes within the wedge region by characterizing the foregrounds and beam patterns of a lunar array with extreme precision, we can conservatively assume that modes within the horizon wedge are unusable for 21 cm cosmology (figure 1). More optimistically, however, calibration methods such as [22] may allow substantial suppression of the foregrounds. We bracket the possibilities by producing forecasts for both horizon-wedge (pessimistic) and no-wedge (optimistic) scenarios.
(iii) Bandwidth and spectral resolution
The frequency axis of the observations simultaneously encodes the evolution of the signal with redshift, $z$, and the variations of the 21 cm field in the radial or line-of-sight direction (represented by the radial Fourier wavenumber, $k_\parallel$). It is common to make the approximation that the redshift evolution is negligible over a small frequency range (i.e. within a sufficiently narrow redshift bin), so that frequency channels can be mapped to a radial distance within a three-dimensional volume at a ‘fixed’ central redshift. The frequency resolution then gives the maximum radial Fourier wavenumber, $k_{\parallel,\rm max}$, that can be measured in the volume. Similarly, the bandwidth over which the redshift evolution is neglected can be converted into a maximum radial extent of the three-dimensional volume, and so sets the fundamental radial mode, $k_{\parallel,\rm min}$.
While there are methods that allow this redshift-binning approximation to be avoided, working with the Fourier-space power spectrum and bispectrum typically requires some kind of redshift binning to be done, and so we take it as an unavoidable aspect of our analysis here. The redshift bin width can be chosen to keep the cosmological evolution of the signal across the bin sub-dominant, and so will typically vary with frequency. As a somewhat maximal choice, we assume a 30% fractional bandwidth within each bin. Since a bin centred at redshift $z$ corresponds to a centre frequency of $1420.4\,{\rm MHz}/(1+z)$, this equates to a bandwidth of roughly 8 MHz at $z = 50$, for example.
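A small sketch of these conversions, using the standard mapping between frequency intervals and comoving radial distances, $\Delta\chi \approx c(1+z)^2\Delta\nu/(\nu_{21} H(z))$ (the cosmological parameters and the example redshift are our own illustrative choices):

```python
import numpy as np

C_KMS = 299792.458        # speed of light [km/s]
NU_21_MHZ = 1420.4
H0, OMEGA_M = 67.7, 0.31  # assumed flat-LCDM values

def hubble(z):
    """H(z) in km/s/Mpc (matter + Lambda; radiation neglected)."""
    return H0 * np.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def dchi_mpc(z, dnu_mhz):
    """Comoving radial distance [Mpc] spanned by a frequency interval."""
    return C_KMS * (1 + z)**2 * dnu_mhz / (NU_21_MHZ * hubble(z))

z = 50.0
nu = NU_21_MHZ / (1 + z)          # ~27.9 MHz centre frequency
band = 0.3 * nu                   # 30% fractional bandwidth
chan = 0.01                       # 10 kHz channel width, in MHz

k_par_min = 2 * np.pi / dchi_mpc(z, band)   # fundamental radial mode
k_par_max = np.pi / dchi_mpc(z, chan)       # Nyquist radial mode
print(f"k_par range at z={z:.0f}: {k_par_min:.4f} - {k_par_max:.1f} Mpc^-1")
```

Note that radiation is neglected in $H(z)$ here, which is only approximately valid at the highest Dark Ages redshifts, so the numbers should be treated as indicative.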
(b) Representative baseline distributions
In this section, we attempt to distill the various mission concepts in the literature into a set of representative development stages for a lunar 21 cm array. We have used the CoDEX [23], FarSide/FarView [24,25] and ROLSS/DALI [26] concepts as the basis for the following.
(i) Stage I
A relatively small pathfinder array with a dense core, likely in a regular grid layout of a few hundred antennae, which is the minimum needed to test antenna deployment technologies and implement an FFT-based correlator. To soften the technical requirements, choices such as a reduced bandwidth (e.g. 20–60 MHz) and a relatively short maximum baseline length can be made. For maximum and minimum baseline lengths of approximately 1 km and 30 m, respectively, scales around the top end of the BAO range would be accessible.
(ii) Stage II
A large array of around 10 000 antennae with a large core plus some outriggers to increase the maximum baseline length to around 5 km. Building on the technology demonstrated by the Stage I instrument, this could allow for deviations from an FFT-friendly regular grid into a more balanced layout, permitting longer baselines for imaging. Improvements in data rate, antenna design etc. would allow a wider bandwidth to be observed, perhaps in the 10–100 MHz range, covering both the Cosmic Dawn and well into the Dark Ages (note that we only consider the Dark Ages, $z \gtrsim 23$, in this paper). A considerably wider range of angular scales would be accessible with this instrument, permitting some initial imaging applications at the lowest redshifts.
(iii) Stage III
A giant array of 100 000 or more antennae over a large area of $100\,{\rm km}^2$ or so. This would further improve on the frequency range of Stage II, probing frequencies down to 5 MHz or so, and extending the accessible range of angular scales further still.
In all cases, we employ a minimum baseline length of 20 m. This should be compared with the maximum observing wavelengths of between approximately 15 m and 60 m (at 20 MHz and 5 MHz, respectively), and a typical dipole length of order several metres. Neighbouring antennas will be within the near-field of one another at the higher frequencies, and we can expect mutual coupling effects to be important.
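To see why, consider the standard far-field (Fraunhofer) criterion, $d_{\rm ff} \approx 2D^2/\lambda$; a minimal sketch, assuming an illustrative antenna scale of $D = 10$ m (not a value from the paper):

```python
# Far-field (Fraunhofer) criterion d_ff ~ 2 D^2 / lambda: antennas closer
# than d_ff are in each other's near field.
C = 2.998e8  # speed of light [m/s]
D = 10.0     # assumed antenna scale [m]
for nu_mhz in (5, 20, 60):
    lam = C / (nu_mhz * 1e6)
    d_ff = 2 * D**2 / lam
    print(f"{nu_mhz:2d} MHz: lambda = {lam:5.1f} m, d_ff = {d_ff:5.1f} m")
```

With these numbers, the far-field distance at 60 MHz (about 40 m) exceeds the 20 m minimum baseline, so neighbouring antennas are indeed in each other's near field at the higher frequencies.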
Rather than defining an explicit baseline distribution for each stage, we attempt to capture representative distributions using a fitting function for the circularly averaged baseline density, $n(u)$, shown in figure 2.
Figure 2. Notional baseline distributions for the three representative arrays. The functional form is chosen to have a similar form to the one for FarView for the Stage III experiment. Vertical dashed lines show the maximum baseline length for a regular square array with linear extent 500 m, 5 km and 10 km, respectively. The Stage I and II experiments have similar densities for the shortest baselines, differing only in the number of long baselines, consistent with extending a dense core by adding outriggers at intermediate and large distances.
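The exact fitting function is not reproduced here; as a purely hypothetical stand-in with the qualitative features described above (a dense core with a declining tail of longer baselines), one could use something like:

```python
import numpy as np

def n_u_model(u, n0, u_core, gamma):
    """Hypothetical circularly averaged baseline density n(u):
    flat dense core with a power-law tail (illustrative only)."""
    return n0 / (1.0 + (u / u_core)**gamma)

# Example: normalize so that integrating n(u) over the uv-plane
# recovers the total number of baselines, N_bl = N_ant (N_ant - 1) / 2.
n_ant = 10_000
n_bl = n_ant * (n_ant - 1) / 2
u = np.linspace(0.1, 2000.0, 4000)
shape = n_u_model(u, 1.0, 50.0, 3.0)
norm = n_bl / np.trapz(2 * np.pi * u * shape, u)
n_u = norm * shape
```

Any functional form with the same normalization constraint could be substituted without changing the forecasting machinery below.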
configuration | no. antennas | approx. area | freq. range | redshift range
---|---|---|---|---
Stage I | a few hundred | 1 km × 1 km | 20–60 MHz | 23–70
Stage II | ~10 000 | 5 km × 5 km | 10–60 MHz | 23–140
Stage III | ≳100 000 | 10 km × 10 km | 5–60 MHz | 23–280
3. Fisher forecasting formalism
In this section, we describe the theoretical calculation of the 21 cm power spectrum and bispectrum during the Dark Ages, and the Fisher forecasting formalism that we use. We base our calculations on [5,27,35].
The Fisher matrix for the anisotropic bispectrum takes the standard form
$$F_{ij} = \sum_{z\,{\rm bins}}\ \sum_{\rm triangles} \frac{1}{{\rm Var}[B]}\frac{\partial B}{\partial \theta_i}\frac{\partial B}{\partial \theta_j}, \qquad (3.1)$$
where the sum runs over all triangle configurations formed by the surviving Fourier modes in each redshift bin, $\theta_i$ are the model parameters, and the variance of each configuration,
$${\rm Var}[B(k_1,k_2,k_3)] \propto s_B\, P_{\rm tot}(k_1)\, P_{\rm tot}(k_2)\, P_{\rm tot}(k_3), \qquad (3.2)$$
is proportional to the product of the total (signal plus noise) power spectra of the three modes, with a symmetry factor $s_B$ for equilateral and isosceles configurations.
(a) Model of the HI bispectrum during the Dark Ages
Our model for the HI bispectrum (following [5]) is the sum of a primordial term, a term generated by gravitational evolution, and a term arising from the nonlinear mapping between the brightness temperature and the underlying density fields, schematically
$$B_{21}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,z) = f_{\rm NL}\, B_{\rm prim} + B_{\rm grav} + B_{\rm nl}, \qquad (3.4)$$
where $B_{\rm prim}$ is evaluated for $f_{\rm NL} = 1$.
The full expressions for these terms are given in [5], but we briefly describe them here. The kernel functions multiplying each term depend on the angle between the wavevectors and the line of sight $\hat{\mathbf{n}}$, $\mu = \hat{\mathbf{k}}\cdot\hat{\mathbf{n}}$, on the mean 21 cm brightness temperature at redshift $z$, $\bar{T}_{21}(z)$, and on the derivative of the brightness temperature with respect to the linear baryon overdensity, $\partial T_{21}/\partial\delta_b$. For the primordial bispectrum, there is an additional dependence on the transfer function relating the primordial potential to the late-time density field (e.g. see [10]); the gravitational contribution also depends on the second-order perturbation theory kernels, $F_2$ and $G_2$; and the nonlinear part also depends on the second derivative of the brightness temperature with respect to the overdensity. Finally, the primordial bispectrum can be constructed from a sum of contributions from different bispectrum shapes; we retain only the local-type bispectrum in this work, which can be calculated as
$$B_{\rm prim}^{\rm loc}(k_1,k_2,k_3) = 2 f_{\rm NL}\left[P_\Phi(k_1) P_\Phi(k_2) + P_\Phi(k_2) P_\Phi(k_3) + P_\Phi(k_3) P_\Phi(k_1)\right],$$
where $P_\Phi$ is the power spectrum of the primordial potential, mapped to the observed brightness temperature bispectrum via the appropriate transfer functions.
Figure 3. Amplitude of the bispectrum for triangle configurations in which the length of one of the sides is held fixed. (a) The primordial bispectrum as a function of triangle configuration, assuming local-type non-Gaussianity (note the higher amplitude in the squeezed corner, top left). (b) The gravitational contribution to the bispectrum, which is significantly larger. (c) The uncertainty on the bispectrum assuming no thermal noise or instrumental effects (cosmic variance contribution only).
In a more comprehensive treatment, we could calculate the Fisher matrix for a range of cosmological parameters in order to account for their uncertainties, and for correlations between the different parameters. This is largely a matter of evaluating the derivatives of equation (3.4) with respect to these parameters, as they appear in equation (3.1). In the present article, however, we focus only on the simplest case of an idealized forecast for a single parameter, $f_{\rm NL}$. Since this parameter appears as a prefactor of only a single term in equation (3.4), the expression for the derivatives simplifies to
$$\frac{\partial B_{21}}{\partial f_{\rm NL}} = B_{\rm prim}\Big|_{f_{\rm NL}=1}.$$
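Schematically, the single-parameter forecast then reduces to a weighted sum over triangle bins; a minimal sketch with placeholder inputs (none of the numbers below come from the actual model of [5]):

```python
import numpy as np

def sigma_fnl(db_dfnl, var_b):
    """Single-parameter Fisher estimate: sigma(f_NL) = 1 / sqrt(F), where
    F = sum over triangle bins of (dB/df_NL)^2 / Var[B]."""
    fisher = np.sum(db_dfnl**2 / var_b)
    return 1.0 / np.sqrt(fisher)

# Toy usage with random placeholder values standing in for the model
# evaluated on ~10^5 triangle bins (illustrative only):
rng = np.random.default_rng(0)
db = rng.lognormal(0.0, 1.0, 100_000)   # placeholder dB/df_NL per bin
var = rng.lognormal(4.0, 1.0, 100_000)  # placeholder Var[B] per bin
print(f"toy sigma(f_NL) = {sigma_fnl(db, var):.3f}")
```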
The final ingredient needed to evaluate the Fisher matrix for $f_{\rm NL}$ is an expression for the Dark Ages 21 cm power spectrum, which appears (together with the noise power spectrum) in the variance term of equation (3.2). This is given by the square of the linear-order brightness temperature kernel multiplied by the linear matter power spectrum; the full expression can be found in [5].
(b) Model of the interferometer noise power spectrum
The last factor, $P_{\rm N}$, is the power spectrum of the instrumental noise. This term describes the noise variance on each Fourier mode measured by the radio array, and so sets the noise level on each triangle configuration that contributes to the bispectrum.
Following the notation of [27], the noise power spectrum for an interferometer can be calculated from the radiometer equation as
$$P_{\rm N}(k_\perp, z) = \frac{T_{\rm sys}^2(z)\, \chi^2(z)\, y(z)\, \lambda^4\, S_{\rm area}}{A_{\rm e}^2\, \theta_{\rm FOV}^2\, n(u)\, t_{\rm tot}}, \qquad (3.11)$$
where $T_{\rm sys}$ is the (sky-dominated) system temperature, $y(z) = c(1+z)^2/(\nu_{21} H(z))$ converts frequency intervals into comoving radial distances, $S_{\rm area}$ is the survey area, $A_{\rm e}$ is the effective area of each antenna, $\theta_{\rm FOV}^2$ is the field of view, $n(u)$ is the baseline number density and $t_{\rm tot}$ is the total observing time.
Figure 4 shows the noise power spectra of the three stages of experiment, using the idealized baseline distributions discussed in §2. As a reference survey timescale, we assume approximately 22 000 h of observing time, which crudely represents an efficient 5-year survey with a 50% duty cycle, e.g. to account for flagging of data when the Sun is up or bright sources are in the sidelobes. The survey area and field of view are assumed to be the same in each case. For a dipole-like antenna, the effective area is $A_{\rm e} = G\lambda^2/4\pi$, where we assume a slightly enhanced gain of $G \approx 1.6$. The sharp cutoff at low $k_\perp$ is due to the assumed minimum baseline length of 20 m, although lower $k_\perp$ could potentially be recovered if calibrated zero-spacing (autocorrelation) data can be obtained. There is a clear redshift dependence, with a shift to lower $k_\perp$ as $z$ increases due to the frequency dependence of the fringe pattern of each baseline, and a shift to higher noise power as $z$ increases (frequency decreases) due to the increase in system (sky) temperature. As the array increases in size, the number of baselines increases on all scales, increasing the sensitivity by an order of magnitude or more between each generation. Larger arrays also have longer maximum baseline lengths, and so reach a higher effective maximum $k_\perp$.
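A sketch of how such curves can be evaluated, using the schematic form of equation (3.11) above (the prefactors, the sky-temperature power law and the parameter values are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

C_KMS = 299792.458
NU_21_MHZ = 1420.4
H0, OM = 67.7, 0.31  # assumed flat-LCDM parameters

def hubble(z):
    return H0 * np.sqrt(OM * (1 + z)**3 + (1 - OM))  # [km/s/Mpc]

def chi(z):
    """Comoving distance [Mpc] by direct numerical integration."""
    zz = np.linspace(0.0, z, 2048)
    return np.trapz(C_KMS / hubble(zz), zz)

def noise_power(n_u, z, t_tot_s, s_area_sr=1.0, fov_sr=1.0):
    """Schematic P_N following the radiometer-equation form of eq. (3.11).
    All prefactors and the sky-temperature model are assumptions."""
    lam = 299.792458 * (1 + z) / NU_21_MHZ               # wavelength [m]
    t_sys = 180.0 * (180.0 * (1 + z) / NU_21_MHZ)**2.6   # sky temperature [K]
    a_eff = 1.6 * lam**2 / (4 * np.pi)                   # dipole effective area
    y = C_KMS * (1 + z)**2 / (NU_21_MHZ * hubble(z))     # [Mpc/MHz]
    return (t_sys**2 * chi(z)**2 * y * lam**4 * s_area_sr
            / (a_eff**2 * fov_sr * n_u * t_tot_s))

# Example: z = 50, a uv-density of 1 baseline per unit area, 22,000 h.
print(f"P_N ~ {noise_power(1.0, 50.0, 22_000 * 3600.0):.3e} (arb. units)")
```

The main qualitative behaviours in figure 4 are visible directly in this expression: the sensitivity improves linearly with $n(u)$ and $t_{\rm tot}$, and degrades steeply with redshift through $T_{\rm sys}^2$.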
Figure 4. Noise power spectrum $P_{\rm N}(k_\perp)$, as given in equation (3.11), assuming approximately 22 000 h of observation time (equivalent to 5 years with a 50% duty cycle). The curves correspond to the three types of experiment: Stage I (blue), Stage II (red) and Stage III (orange), with solid and dotted lines denoting two representative redshifts. The increase in array size gains between one and two orders of magnitude in sensitivity between each stage. For comparison, the dimensionless power spectrum of the model considered in [29] lies below the detection threshold even for Stage III with these figures.
In what follows, we will assume that the noise power spectrum is sub-dominant, i.e. that we are in the sample variance-dominated limit. Reaching this limit would be a major technical feat, involving a very long survey duration, excellent control over systematic effects, calibration errors, and so on. It is nevertheless useful to consider this limit as giving the best constraints that could possibly be achieved with a given array configuration.
4. Results
In this section, we present Fisher matrix forecasts for the $f_{\rm NL}$ parameter measured from the HI brightness temperature bispectrum during the Dark Ages, using the formalism described in §3. We consider the three stages of experiment described in table 1, and show the impact of different assumptions about instrumental/scale cuts related to foreground contamination etc.
For all of the forecasts, we assume a flat $\Lambda$CDM model with best-fitting parameters from [30]. We restrict ourselves to the redshift range $23 \lesssim z \lesssim 200$ (frequencies between 7 and 60 MHz) in order to keep the same fitting functions for the HI brightness temperature etc. given in [5], and do not marginalize over uncertainties in astrophysical quantities that set the overall scale of the HI signal, or cosmological parameters that set the shape of the non-primordial contributions to the bispectrum. These assumptions will be relaxed in future work, and have been studied in past works, e.g. [5,29]. As explained in §2, we also make assumptions about the co-planarity of the baselines, and use a flat-sky limit that also neglects cosmological evolution within discrete redshift bins. In all the cases presented here, the redshift bins are chosen to be sub-bands with a 30% fractional bandwidth at their respective centre frequencies. This is at the upper end of what is plausible if we wish to neglect cosmic evolution within each bin.
Figure 5 shows the forecast 68% CL error on the $f_{\rm NL}$ parameter for each generation of experiment as a function of redshift. Under the assumptions we have made—particularly neglecting the thermal noise contribution—all three configurations produce similar results in redshift bins where they overlap. The main difference between the three stages largely comes down to the maximum redshift that each of them can reach. In practice, the Stage I array has about 300 times the noise level of the Stage III array at the same scales and redshifts, and so would need to integrate for a much longer time to reach the same signal-to-noise ratio.
Figure 5. Cosmic variance-limited ($P_{\rm N} = 0$) Fisher forecast for the $f_{\rm NL}$ parameter for local-type primordial non-Gaussianity using the three stages of lunar 21 cm experiments. The results are shown for the foreground-free case (solid), i.e. no wedge excision, and with foregrounds (dashed), i.e. excluding the horizon wedge from the data. The channel width (10 kHz) sets the maximum radial scale included in the analysis. (Note that we use this as the maximum radial scale rather than a nonlinear cut-off, as would be more usual for low-$z$ experiments.)
A number of effects contribute to the shape of the curves in figure 5. The comoving volume of each redshift bin is a key factor in setting the sample variance limit, and increases significantly from low to high redshift. This is tempered by the decrease in the maximum accessible transverse wavenumber with increasing redshift, since $k_{\perp,\rm max} \propto 1/\lambda$ for a fixed maximum baseline length. Recall that $f_{\rm NL}$ encodes the amplitude of the primordial bispectrum in the squeezed limit, where there is one low-$k$ leg and two higher-$k$ legs to each triangle. The number of large-scale modes available to form the low-$k$ leg of the squeezed-limit triangles also increases with redshift. Redshift-dependent limits on $f_{\rm NL}$ substantially better than current bounds can be obtained across the accessible redshift range, surpassing the CMB and low-redshift galaxy surveys. In the case where the foreground wedge region must be excised completely from the data (figure 1), this degrades to around 0.05 at the low-redshift end and 0.5 at the high-redshift end, which is still competitive with current constraints. Significantly more evolution in $\sigma(f_{\rm NL})$ with redshift is observed when the wedge is removed, as the size of the wedge region depends on wavelength.
We stress that these forecasts are optimistic in a number of ways. Most importantly, the secondary/gravitational bispectrum and the astrophysical prefactors (i.e. the mean brightness temperature and its derivatives) have been assumed to be perfectly known. In [5], however, these terms were found to be important, with the astrophysical prefactors exhibiting strong correlations with $f_{\rm NL}$ in their Fisher forecasts. Their bottom-line forecast for $\sigma(f_{\rm NL})$ was correspondingly weaker than our prediction for a large array (other differences, such as their choice of larger 100 kHz channel widths, also contribute to the discrepancy). We note, however, that relatively mild priors on the astrophysical factors, e.g. from models fitted at lower redshift, should be helpful in breaking the strong correlations, potentially allowing better constraints on $f_{\rm NL}$ to be achieved than presented in [5]. In this case, our results should be considered to bound the achievable values of $\sigma(f_{\rm NL})$ from below, as we are operating in the optimistic sample variance-limited case.
5. Conclusion
In this paper, we reviewed the distinctive properties of the Dark Ages 21 cm brightness temperature field as a probe of early Universe physics, particularly as encoded by the local-type non-Gaussianity parameter $f_{\rm NL}$. We examined how a lunar radio interferometer could be used to measure the 21 cm bispectrum while avoiding severe problems such as ionospheric distortion and radio frequency interference on Earth, and discussed how different instrumental effects contribute scale cuts that limit the number of Fourier modes of the three-dimensional 21 cm field that can be recovered. We then went on to define a set of three representative stages of the development of such lunar arrays, beginning with smaller ones with a few hundred antennae spread over a square kilometre, and culminating in a much larger array of a hundred thousand or more antennae spread over $\sim 100\,{\rm km}^2$. These were used in simple exploratory Fisher matrix forecasts to show how well the 21 cm bispectrum, and hence the $f_{\rm NL}$ parameter, could be measured under optimistic assumptions, such as neglecting thermal noise (i.e. assuming sample variance-limited observations). In essence, our results show the best that a notional lunar 21 cm experiment could do in the absence of systematic effects in the parts of the data that remain after a series of scale cuts, and without limits on observing time.
We found that the severe scale cuts that are applied to many ground-based 21 cm experiments (at lower redshift) in order to remove foreground contamination did degrade the predicted constraints on $f_{\rm NL}$ substantially, but that under our assumptions the signal could still be measured at a level that is competitive with CMB and galaxy survey experiments.
Overall, we suggest that a staged deployment of lunar 21 cm arrays (moving from compact arrays with a few hundred antennae to widely distributed ones with up to a hundred thousand antennae) should provide a robust path to measuring the local-type non-Gaussianity parameter $f_{\rm NL}$, with the prospect of substantially improved precision compared with ground-based experiments. While there are obvious engineering and cost challenges associated with deploying such arrays on the Moon, we have seen that the cosmological performance of the arrays survives even quite stringent scale cuts to remove foreground contamination and the like. In terms of future work, it would be desirable to go beyond the analytic approach that we have used here, and perform a direct demonstration of 21 cm bispectrum recovery on simulated data that include realistic instrumental effects such as foregrounds, calibration errors, antenna variations and mutual coupling. We have also commented on the need to marginalize over astrophysical uncertainties, which is likely to further reduce the forecast precision.
Data accessibility
This article has no additional data.
Declaration of AI use
We have not used AI-assisted technologies in creating this article.
Authors' contributions
P.B.: conceptualization, formal analysis, investigation, project administration, software, supervision, validation, visualization, writing—original draft, writing—review and editing; C.G.: investigation, software, visualization, writing—original draft; C.A.: investigation, visualization.
All authors gave final approval for publication and agreed to be held accountable for the work performed therein.
Conflict of interest declaration
We declare we have no competing interests.
Funding
We acknowledge the Royal Society for travel support. This result is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 948764). C.G. is supported by STFC consolidated grant ST/T000341/1.
Acknowledgements
We are grateful to T. Flöss, D. Karagiannis, D. Meerburg, G. Orlando and J. Silk for useful discussions, and two anonymous referees for their helpful comments. This work made extensive use of the public code class [31], and the following python packages and libraries: numpy [32], scipy [33] and matplotlib [34].