Assessing non-linear models for galaxy clustering I: unbiased growth forecasts from multipole expansion

We assess the performance of the Taruya, Nishimichi and Saito (TNS) model for the halo redshift space power spectrum, focusing on utilising mildly non-linear scales to constrain the growth rate of structure f. Using simulations with volume and number density typical of forthcoming Stage IV galaxy surveys, we determine ranges of validity for the model at redshifts z = 0.5 and z = 1. We proceed to perform a Bayesian MCMC analysis utilising the monopole, quadrupole, and hexadecapole spectra, followed by an exploratory Fisher matrix analysis. As previously noted in other forecasts as well as in real data analyses, we find that including the hexadecapole can significantly improve the constraints. However, a restricted range of scales is required for the hexadecapole in order for the growth parameter estimation to remain unbiased, limiting the improvement. We consistently quantify these effects by employing the multipole expansion formalism in both our Fisher and MCMC forecasts.


INTRODUCTION
Forthcoming large-scale structure (LSS) surveys such as EUCLID (Laureijs et al. 2011), WFIRST, and the Dark Energy Spectroscopic Instrument (DESI) promise to deliver exquisite cosmological measurements and test the laws of gravity with unprecedented precision. The standard cosmological model, ΛCDM, has been shown to provide an excellent fit to the data from a suite of CMB and LSS surveys (Ade et al. 2016; Anderson et al. 2013; Song et al. 2015), assuming that General Relativity is the correct description of gravity across all scales. A plethora of modified gravity models have been proposed, motivated by the possibility of explaining the late-time accelerated expansion of the Universe without the need for a cosmological constant (see Clifton et al. 2012, for a comprehensive review). Future LSS measurements with Stage IV spectroscopic galaxy surveys aim to measure the logarithmic growth rate of structure f, which depends strongly on cosmology and gravity (Guzzo et al. 2008). This will be achieved by probing the redshift space distortion (RSD) signature with the galaxy power spectrum or correlation function (Blake et al. 2011; Reid et al. 2012; Macaulay et al. 2013; Beutler et al. 2014; Gil-Marín et al. 2016a; Simpson et al. 2016).
The volume and number of galaxies probed by Stage IV surveys are very large, meaning that these surveys will not be limited by statistical uncertainties, but rather by our ability to deal with systematic effects and theoretical uncertainties. The latter include non-linear effects on the redshift space galaxy clustering. In the case of RSD measurements, these have to be understood and modelled properly so that we can confidently utilise the data from mildly non-linear scales to improve the constraints on f .
For this purpose, we consider the commonly used TNS model (Taruya et al. 2010). This model can reproduce the broadband power spectrum, including RSD, from simulations at linear and mildly non-linear scales (Taruya et al. 2013; Ishikawa et al. 2014; Zheng and Song 2016; Gil-Marín et al. 2016a,b; Bose and Koyama 2016). The model is combined with the generalised bias prescription of McDonald and Roy (2009). This is very similar to what has been used in part of the BOSS survey data analysis. We use a set of COLA simulations (Tassev et al. 2013; Howlett et al. 2015; Valogiannis and Bean 2017) to determine a range of validity for the models at redshifts z = 0.5 and 1. We then perform a Bayesian MCMC analysis with the goal of assessing the constraining power of Stage IV-like surveys while also confirming the validity of the models across the range of scales we determined previously. Our focus is getting unbiased constraints on f using information from the first three multipoles: the monopole (P_0), quadrupole (P_2), and hexadecapole (P_4) of the power spectrum. We then move on to an exploratory Fisher matrix forecast analysis using the full anisotropic power spectrum, P(k, µ), in addition to the multipole expansion formalism. Our final estimates for the unbiased measurements of f are determined consistently from the MCMC and Fisher multipole expansion forecasts. This paper is organised as follows: In section 2 we present the TNS-based biased tracer RSD model we assemble, and the fits to simulations to determine the fiducial nuisance parameters and the model's range of validity. In section 3 we perform a Bayesian MCMC analysis and present results, followed by a Fisher matrix analysis in section 4. We then present a comparison between Fisher and MCMC using the multipole expansion formalism. In section 5 we summarise our findings and conclude. Our Fisher matrix codes have been made publicly available at https://github.com/Alkistis/GC-Fish-nonlinear.

Model description
The TNS model (Taruya et al. 2010) is based on standard Eulerian perturbation theory (SPT), which assumes that the background space-time is homogeneous and isotropic, and that we work within the Newtonian regime at mildly non-linear scales. These scales are far within the horizon but with δ, θ ≪ 1, where δ and θ are the density and velocity perturbations respectively. We also assume that gravity is described by general relativity. The explicit model is given by

P_S(k, µ) = D_FoG(k, µ; σ_v) [P_{g,δδ}(k) + 2fµ² P_{g,δθ}(k) + f²µ⁴ P_{θθ}(k) + A(k, µ) + B(k, µ) + C(k, µ)],   (1)

where f is the logarithmic growth rate, µ is the cosine of the angle between k and the line of sight, and the P_g are the 1-loop galaxy power spectra with the bias model of McDonald and Roy (2009) implicitly included, while the A, B and C terms are non-linear perturbative corrections arising from the transformation to redshift space. The terms in brackets are all constructed within SPT, while the prefactor, D_FoG, is phenomenological. Here we choose a Lorentzian form (Gil-Marín et al. 2012; Percival and White 2009; Taruya et al. 2010; Peacock and Dodds 1994),

D_FoG(k, µ; σ_v) = 1 / [1 + (kµσ_v)²/2],   (2)

where σ_v is a redshift-dependent free parameter representing the velocity dispersion of perturbations at cluster scales. We refer the reader to Taruya et al. (2010) for the formulas for the perturbative components of the model, along with the explicit dependency on the independent free bias parameters {b_1, b_2, N}. Following earlier TNS-based analyses, we ignore all other bias terms under the local Lagrangian assumption (Sheth et al. 2013; Chan et al. 2012; Saito et al. 2014; Baldauf et al. 2012).
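As a minimal sketch of how Equations 1 and 2 combine (with hypothetical precomputed inputs: in practice P_{g,δδ}, P_{g,δθ}, P_{θθ} and the A + B + C corrections come from the 1-loop SPT integrals), the model assembly could look like:

```python
import numpy as np

def d_fog_lorentzian(k, mu, sigma_v):
    """Lorentzian Fingers-of-God factor, Eq. (2): 1 / [1 + (k mu sigma_v)^2 / 2]."""
    return 1.0 / (1.0 + 0.5 * (k * mu * sigma_v) ** 2)

def tns_pk(k, mu, f, p_gdd, p_gdt, p_tt, corr, sigma_v):
    """Eq. (1): damping prefactor times the SPT bracket.
    p_gdd, p_gdt, p_tt: 1-loop galaxy density, cross and velocity spectra at k;
    corr: the summed A + B + C correction terms (all hypothetical inputs here)."""
    bracket = p_gdd + 2.0 * f * mu**2 * p_gdt + f**2 * mu**4 * p_tt + corr
    return d_fog_lorentzian(k, mu, sigma_v) * bracket
```

Transverse to the line of sight (µ = 0) the damping prefactor is unity and the bracket reduces to P_{g,δδ}, while along the line of sight (µ = 1) the Lorentzian suppresses small-scale power.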
We emphasise again that this model is very similar to that used in the recent BOSS analysis.
We choose it to make use of its thorough validation with simulations and mock catalogues. It is also worth noting that it has shown robustness when considering alternative theories of gravity (for example Bose and Koyama 2016). However, we stress that there are key differences in our model: we choose a Lorentzian rather than a Gaussian damping factor in Equation 2, we include the C(k, µ) term in Equation 1, and we use SPT rather than RegPT (Taruya et al. 2012), the latter being the biggest difference. SPT is known to suffer from divergences in the loop expansion at low redshifts (see Carlson et al. 2009, for example), which the re-normalisation scheme of the RegPT approach addresses. Despite this, SPT clearly does well for z ≥ 1 (Carlson et al. 2009; Osato et al. 2018). Our work finds that it also works well for z = 0.5 given our derived goodness of fit to the data, as also suggested in other works (Carlson et al. 2009; Vlah et al. 2015). Further, in Bose et al. (2019a) the authors show that SPT provides a better fit to the redshift space halo multipoles at both z = 0.5 and z = 1 than the RegPT prescription. Given these findings, it would be very interesting to repeat the BOSS data analysis with this variant of the TNS model.
The full set of nuisance parameters in this model is therefore {σ_v, b_1, b_2, N}. In our MCMC and Fisher matrix analyses we will vary these nuisance parameters along with the cosmological parameter of interest, the growth rate of structure f. Our chosen set of parameters is restricted. Perhaps most importantly, we do not include variations of the background "shape" parameters or the Alcock-Paczynski effect. There are two reasons for this. Firstly, our goal is to demonstrate the trade-off between our constraining power on f and the bias on its estimation as a function of the k-ranges used from the monopole, quadrupole, and hexadecapole spectra. We then wish to provide a consistent comparison between Fisher matrix and MCMC forecasts. This can be achieved without an extensive set of parameters. Secondly, varying extra parameters in the MCMC is computationally expensive, since all the model components have to be calculated for every sample. Using a restricted set mitigates this problem. However, work on optimising our code for speed is underway, and we hope to present an analysis with all parameters of interest in future work. We also note that while it is customary to present constraints on fσ_8(z), and indeed the BOSS analysis uses this parametrisation, other analyses have opted to present constraints on f alone, e.g. Blake et al. (2011). The reason for this choice in a real data analysis is to test whether the Planck best-fit model also predicts the growth of structure observed by the galaxy survey.
Finally, we note that great progress has been made since the TNS model was first proposed, in particular regarding the relationship between bias and redshift space distortions (see Desjacques et al. (2018a) for a review). Despite this, the TNS model with this minimal bias model remains one of the simplest and most economical phenomenological models. For example, a fully consistent treatment of the halo power spectrum in redshift space requires many more free parameters (Desjacques et al. 2018b).

Comparison to Simulations
This section is dedicated to determining a rough range of validity for Equation 1, as well as fiducial values for the nuisance parameters. To do this, we make use of a set of four Parallel COmoving Lagrangian Acceleration (PICOLA) simulations (Howlett et al. 2015) of box length 1024 Mpc/h with 1024^3 dark matter particles and a starting redshift z_ini = 49. These are all run within the same ΛCDM cosmology taken from WMAP9 (Hinshaw et al. 2013): Ω_m = 0.281, Ω_b = 0.046, h = 0.697, n_s = 0.971 and σ_8(z = 0) = 0.844.
Halo catalogues from these simulations are constructed using the Friends-of-Friends algorithm with a linking length of 0.2 times the mean particle separation. The halo spectra are measured at redshifts z = 0.5 and z = 1 using all halos above a mass of M_min = 4 × 10^12 M_⊙. This corresponds to a number density of n_h = 1 × 10^−3 h^3/Mpc^3, which is similar to the galaxy number density estimated for Stage IV surveys around the redshifts considered (Amendola et al. 2018).
As is commonly done in real data analyses, the power spectrum is decomposed into its multipoles. The PICOLA multipoles are measured using the distant-observer approximation and averaged over three line-of-sight directions. We further average over the four PICOLA simulations. We note here that (PI)COLA is an approximate method and should be validated against full N-body simulations to ensure any comparisons to the measurements are meaningful. In Izard et al. (2016) comparisons of COLA with full N-body measurements are made, and sufficient agreement is found in the halo monopole and quadrupole. Furthermore, in Bose et al. (2019b), comparisons of COLA with full N-body are made for the halo monopole, quadrupole and hexadecapole using the same COLA code and simulation specifications used in this work. The authors find the COLA approach to be sufficiently accurate at the scales and redshifts considered here.
On the theoretical side, the multipoles are expressed as

P_ℓ(k) = (2ℓ + 1)/2 ∫_{−1}^{1} dµ P_S(k, µ) L_ℓ(µ),   (3)

where L_ℓ(µ) denote the Legendre polynomials and P_S(k, µ) is given by Equation 1. For our fitting analysis, we utilise only the monopole (ℓ = 0) and quadrupole (ℓ = 2), since they include most of the clustering information. The hexadecapole (ℓ = 4) is later considered in section 3, where we perform an MCMC analysis on the PICOLA data.
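The projection in Equation 3 can be evaluated numerically with Gauss-Legendre quadrature. A minimal sketch, where pk_mu stands in for any anisotropic spectrum (e.g. Equation 1):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def multipole(pk_mu, k, ell, n_mu=32):
    """P_ell(k) = (2 ell + 1)/2 * Integral_{-1}^{1} dmu P(k, mu) L_ell(mu)  [Eq. (3)].
    pk_mu(k, mu) returns an array over the k grid; the ell-th multipole is returned
    on the same grid."""
    nodes, weights = leggauss(n_mu)              # Gauss-Legendre nodes on [-1, 1]
    legendre_vals = Legendre.basis(ell)(nodes)   # L_ell evaluated at the nodes
    integrand = np.array([pk_mu(k, mu) for mu in nodes])   # shape (n_mu, n_k)
    return 0.5 * (2 * ell + 1) * np.einsum('m,m,mk->k', weights, legendre_vals, integrand)
```

As a sanity check, for the linear Kaiser spectrum P(k, µ) = (b_1 + fµ²)² P_lin(k) this reproduces the familiar analytic multipoles, e.g. P_0 = (b_1² + 2b_1 f/3 + f²/5) P_lin.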
Our maximum scale in k (the model's range of validity) will be denoted as k_max. To determine it we adopt the following procedure:

1. We fix all cosmological parameters, including the growth rate f, and perform a least-χ² fit to the PICOLA data by varying the model nuisance parameters. In our χ² analysis we use all data bins from k_min = 0.006 h/Mpc to k_max, with k_max being varied in the range 0.125 h/Mpc ≤ k_max ≤ 0.3 h/Mpc.

2. We take the 95% (2σ) confidence intervals (2∆χ²_red) on a χ² distribution with N_dof degrees of freedom. Since N_dof is large in our analysis, the errors are approximately symmetric.

3. The criterion we use to calculate the final k_max for the rest of the analysis is the maximum k-value which gives χ²_red(k_max) − 2∆χ²_red(k_max) ≤ 1. Roughly, this indicates the range of scales over which the model is reliable without biasing cosmological parameter estimates at the required 2σ level.

The reduced χ² statistic is given by

χ²_red = (1/N_dof) Σ_{ℓ,ℓ'=0,2} Σ_{k=k_min}^{k_max} [P^data_ℓ(k) − P^model_ℓ(k)] Cov⁻¹_{ℓℓ'}(k) [P^data_ℓ'(k) − P^model_ℓ'(k)],   (4)

where Cov_{ℓℓ'} is the Gaussian covariance matrix between the different multipoles and k_min = 0.006 h/Mpc. The number of degrees of freedom is N_dof = 2 × N_bins − N_params, where N_bins is the number of k-bins used in the summation and N_params is the number of free parameters in the theoretical model; here, N_params = 4 for the TNS model of Equation 1. Finally, the bin width we use is ∆k = 0.006 h/Mpc.
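The procedure above can be sketched as follows, assuming hypothetical stacked data and model vectors for (P_0, P_2) and their covariance:

```python
import numpy as np

def reduced_chi2(p_data, p_model, cov, n_params=4):
    """Eq. (4): chi^2 per degree of freedom for the stacked (P_0, P_2)
    data vector of length 2 * N_bins."""
    diff = p_data - p_model
    chi2 = diff @ np.linalg.solve(cov, diff)
    n_dof = len(diff) - n_params
    return chi2 / n_dof

def kmax_is_valid(chi2_red, n_dof):
    """The 2-sigma criterion: chi2_red - 2 * Delta(chi2_red) <= 1, using the
    large-N_dof width Delta(chi2_red) ~ sqrt(2 / n_dof)."""
    return chi2_red - 2.0 * np.sqrt(2.0 / n_dof) <= 1.0
```

Scanning k_max then amounts to truncating the data vector at each trial k_max, refitting the nuisance parameters, and keeping the largest k_max for which kmax_is_valid returns True.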
We use linear theory for the covariance matrix between the multipoles (see Appendix C of Taruya et al. 2010, for details). This has been shown to reproduce N-body results up to k ≤ 0.3 h/Mpc at z = 1. A linear covariance also seems to work well up to k ≤ 0.2 h/Mpc at z = 0.5, as shown very recently in Sugiyama et al. (2019). We note that for analysing real data from Stage IV surveys, validated analytical approximations will likely be used alongside numerical covariances constructed from mocks. In the covariance matrix we assume a number density of n_h = 1 × 10^−3 h^3/Mpc^3 and a survey volume of V_s = 4 Gpc^3/h^3.
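A sketch of the Gaussian multipole covariance at a single k, assuming the standard reduced form (2ℓ+1)(2ℓ'+1)/2 ∫ dµ L_ℓ(µ) L_ℓ'(µ) [P(k, µ) + 1/n_h]² with the mode-counting prefactor kept separate (this is an assumed convention; consult Appendix C of Taruya et al. 2010 for the exact expressions):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def reduced_cov(pk_mu, k, ells, nbar, n_mu=32):
    """Gaussian covariance between multipoles at one k-bin, up to the
    mode-counting prefactor (assumed standard form; see lead-in)."""
    nodes, weights = leggauss(n_mu)
    p_tot = np.array([pk_mu(k, mu) + 1.0 / nbar for mu in nodes])  # signal + shot noise
    leg = {l: Legendre.basis(l)(nodes) for l in ells}
    cov = np.empty((len(ells), len(ells)))
    for a, l in enumerate(ells):
        for b, lp in enumerate(ells):
            cov[a, b] = 0.5 * (2*l + 1) * (2*lp + 1) * np.sum(
                weights * leg[l] * leg[lp] * p_tot**2)
    return cov
```

For an isotropic spectrum the off-diagonal terms vanish by Legendre orthogonality and the diagonal reduces to (2ℓ + 1)[P + 1/n_h]², which is a useful sanity check.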
In Figure 1 we show the minimised χ²_red(k_max) for z = 0.5 and z = 1 for the TNS model, with the associated 2σ error bars. We determine k_max = 0.227 h/Mpc and k_max = 0.276 h/Mpc at z = 0.5 and z = 1 respectively. The larger k_max at z = 1 is expected, due to less non-linear structure formation at higher redshifts. We summarise the best fit nuisance parameters and details of the fit in Table 1. We also plot the best fit TNS multipoles against the PICOLA data in Figure 2. In the bottom panels of Figure 2 we show the residuals, that is, the difference between the theoretical prediction and the simulation measurement divided by the errors coming from the covariance matrix. Linear theory (Kaiser 1987) is also shown in green as a reference, where we use b_1 as measured from the simulations and the fiducial value of f.

MCMC ANALYSIS
In this section we perform Bayesian MCMC analyses at redshifts z = 0.5 and 1. We model our log-likelihood using Equation 4, and vary the growth rate f and nuisance parameters of the TNS model outlined in section 2.
Our approach has two purposes. The first is to check how biased the f estimates can be at our derived k max chosen in Table 1. The second is to provide estimates for the f constraints using our version of the TNS model and Stage IV-like specifications, and assess the improvement when adding the hexadecapole.
Our results at z = 0.5 are shown in the top panel of Figure 3. We first utilise the monopole and quadrupole spectra (P_0, P_2) only, at the range of scales determined in section 2. The figure shows the TNS model's recovery of f at k_max = 0.227 h/Mpc (red contours). We can see that the fiducial value is safely recovered with a 2σ criterion (the same as in section 2), and we have also checked that for higher k_max the estimates become increasingly biased.
We then add the hexadecapole, P_4. We find that the estimate of the growth rate f is biased if we take the hexadecapole up to the same k_max found in Table 1. That is because the TNS model is not flexible enough to account for the hexadecapole over this range of scales, as has also been seen in the BOSS data analysis. However, motivated by Figure 2, we can consider the hexadecapole up to a more conservative value, k_max,4 = 0.129 h/Mpc, without biasing the f estimate (blue contours). This is again similar to what was done in the BOSS data analysis. Our MCMC estimates for the fractional error on f(z = 0.5) with this process are 3.6% using the monopole and quadrupole, and 3.2% when adding the hexadecapole with the restricted range of scales. We will refer to this case as P_0 + P_2 + P_4|restricted throughout the paper.
Our results at z = 1 are shown in the bottom panel of Figure 3. We follow a similar procedure as before, with k_max = 0.276 h/Mpc for the monopole and quadrupole, and k_max,4 = 0.05 h/Mpc for the hexadecapole, in order for the f estimate to remain unbiased at our required 2σ level. We have checked that a higher k_max,4 biases the estimates of f. This k_max,4 is lower than the one used at z = 0.5, which may seem counterintuitive. We can explain this in terms of the model's flexibility: at k_max = 0.276 h/Mpc the model is already being severely tested, and therefore it cannot account for P_4 to any higher k_max,4 without becoming biased. Our MCMC estimates for the marginalised fractional error on f(z = 1) with this process are 3% using the monopole and quadrupole, and 2.6% when adding the hexadecapole with the restricted range of scales, P_0 + P_2 + P_4|restricted.

Before moving on to the Fisher matrix analysis, it is useful to perform another set of MCMC analyses to get a sense of the trade-off between bias and constraining power in f depending on k_max. At redshift z = 0.5, in addition to the P_0 + P_2 + P_4|restricted case presented above, we perform two additional MCMC analyses using the full P(k, µ) (equivalent to P_0 + P_2 + P_4 with equal k_max for monopole, quadrupole, and hexadecapole). We first set k_max = 0.227 h/Mpc, followed by a more conservative k_max = 0.129 h/Mpc. The results are shown in Figure 4. For the full P(k, µ) with k_max = 0.227 h/Mpc (red dashed line) we find f = 0.520 ± 0.013, a heavily biased estimate compared to the true value of f = 0.733 in the PICOLA simulations (thick black dash-dotted line). For the more conservative full P(k, µ) with k_max = 0.129 h/Mpc (green dotted line) we see that the true f is within the 1σ contour, f = 0.698 ± 0.037. For the P_0 + P_2 + P_4|restricted case, the estimate is unbiased at the 2σ level, f = 0.782 ± 0.025.

FISHER MATRIX ANALYSIS
In this section we perform an exploratory Fisher matrix analysis to forecast constraints on f using the TNS model. Since we wish to compare the Fisher results with those from our MCMC analysis, we use the Fisher matrix formalism written in terms of multipoles. This allows us to choose different k-ranges for the monopole, quadrupole, and hexadecapole spectra, mimicking the procedure followed in our MCMC analysis in the previous section. We also compare this method with the more commonly used full anisotropic power spectrum method. This allows us to quickly assess how the requirement for unbiased f estimates limits the improvement with respect to the case where the hexadecapole is added with the same k_max as in the monopole and quadrupole analysis.

Fisher matrix formalism using multipole expansion:
Here we summarise the multipole expansion formalism for Fisher forecasts, and refer the reader to the original presentation of this formalism for a comprehensive description. There, the TNS model is equipped with a linear deterministic bias, with the caveat that this assumption is idealistic and a more accurate prescription is essential for realistic forecasts. A Fisher matrix analysis is then performed to investigate the relative contributions of the different multipoles (P_0, P_2 and P_4) when taken to the same k_max, and compared with the full P(k, µ) formalism. Hence, the important distinctions between the forecasts performed in this work and that earlier study are (i) our implementation of a non-linear bias prescription, corresponding to a more accurate theoretical template that could readily be applied to the data, (ii) the use of different k_max for the different multipoles to ensure unbiased forecasts, and (iii) an MCMC analysis alongside the Fisher matrix analysis to consolidate our findings.
In terms of multipoles, the Fisher matrix for a set of parameters {p} is given by (Fisher 1935; Tegmark 1997)

F_ij = V_s Σ_{ℓ,ℓ'} ∫_{k_min}^{k_max} (k² dk / 2π²) (∂P_ℓ(k)/∂p_i) Cov⁻¹_{ℓℓ'}(k) (∂P_ℓ'(k)/∂p_j),   (5)

with Cov being the reduced covariance matrix:

Cov_{ℓℓ'}(k) = ((2ℓ + 1)(2ℓ' + 1)/2) ∫_{−1}^{1} dµ L_ℓ(µ) L_ℓ'(µ) [P(k, µ) + 1/n_h]².   (6)

[Fig. 5 caption: Fisher matrix forecasts at z = 0.5 and 1. The full P(k, µ) case is equivalent to taking P_0, P_2 and P_4 to the same k_max. We also show P_0 + P_2 only, as well as the P_0 + P_2 + P_4|restricted case. Note that we have used the MCMC means as the fiducial Fisher matrix parameter values, to allow for a direct comparison with the MCMC.]
Note that in our analysis we promote k_max → k_max,ℓ in Equation 5, in order to be able to study the P_0 + P_2 + P_4|restricted case. Also, since we want to mimic our assumptions in the MCMC analysis, we use a linear covariance, which means that P(k, µ) in the brackets of Equation 6 is given by the linear formula (Kaiser 1987).
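The promotion k_max → k_max,ℓ in Equation 5 can be implemented by zeroing the multipole derivatives beyond each multipole's own cut. A sketch, where the derivative array dP and the per-bin inverse covariance are hypothetical precomputed inputs:

```python
import numpy as np

def fisher_multipoles(k, dk, dP, cov_inv, kmax_ell, V_s):
    """Eq. (5) with per-multipole k_max:
    F_ij = V_s * sum_{l,l'} sum_k k^2 dk / (2 pi^2) dP_l/dp_i Cov^-1_{ll'} dP_l'/dp_j.
    dP: shape (n_params, n_ell, n_k); cov_inv: shape (n_k, n_ell, n_ell);
    kmax_ell: per-multipole cut, e.g. [0.227, 0.227, 0.129] for (P_0, P_2, P_4)."""
    mask = (k[None, :] <= np.asarray(kmax_ell)[:, None]).astype(float)  # (n_ell, n_k)
    dP_cut = dP * mask[None, :, :]   # zero derivatives beyond each multipole's k_max
    F = np.zeros((dP.shape[0],) * 2)
    for j in range(len(k)):
        weight = V_s * k[j] ** 2 * dk / (2.0 * np.pi ** 2)
        F += weight * dP_cut[:, :, j] @ cov_inv[j] @ dP_cut[:, :, j].T
    return F
```

Setting all entries of kmax_ell equal recovers the standard multipole Fisher matrix, so the restricted case is a strict generalisation.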
Fisher matrix formalism using the full (2D) anisotropic power spectrum: Considering the full power spectrum signal in redshift space, the Fisher matrix becomes (Tegmark 1997)

F_ij = V_s ∫_{k_min}^{k_max} ∫_{−1}^{1} (k² dk dµ / 8π²) (∂ln P(k, µ)/∂p_i) (∂ln P(k, µ)/∂p_j) [n_h P(k, µ) / (n_h P(k, µ) + 1)]².   (7)

Using the Fisher matrix formalism, the forecasted error on a parameter p_i, marginalised over all other parameters, is given by the square root of the corresponding diagonal element of the inverse of the Fisher matrix,

σ(p_i) = √((F⁻¹)_ii).   (8)

We are now ready to present our Fisher matrix analysis results. In Figure 5 we compare the full P(k, µ) Fisher matrix results with the results using only P_0 and P_2 up to the k_max determined in Table 1, and the ones found when adding the hexadecapole with the restricted range of scales, in order for the f estimates to remain unbiased (P_0 + P_2 + P_4|restricted).
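A sketch of the full P(k, µ) Fisher matrix of Equation 7 and the marginalised errors of Equation 8, using a simple Riemann sum over a uniform (k, µ) grid (the log-derivative array dlnP is a hypothetical precomputed input):

```python
import numpy as np

def fisher_full(k, mu, dk, dmu, dlnP, P, nbar, V_s):
    """Eq. (7): F_ij = V_s / (8 pi^2) * Int k^2 dk dmu
    (dlnP/dp_i)(dlnP/dp_j) [nbar P / (nbar P + 1)]^2.
    dlnP: shape (n_params, n_k, n_mu); P: shape (n_k, n_mu) on the (k, mu) grid."""
    weight = (nbar * P / (nbar * P + 1.0)) ** 2 * k[:, None] ** 2  # shot-noise weighting
    n_params = dlnP.shape[0]
    F = np.empty((n_params, n_params))
    for i in range(n_params):
        for j in range(n_params):
            F[i, j] = V_s / (8.0 * np.pi ** 2) * np.sum(dlnP[i] * dlnP[j] * weight) * dk * dmu
    return F

def marginalised_errors(F):
    """Eq. (8): sigma(p_i) = sqrt((F^-1)_ii), marginalised over all other parameters."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

In the sample-variance-limited regime (n_h P ≫ 1) the shot-noise weight tends to unity, and the integrand reduces to the familiar mode-counting form.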
At both redshifts, z = 0.5 and z = 1, it is clear that the full P(k, µ) treatment gives much better constraints on f. More specifically, using the full P(k, µ) formalism we find a 2.4% constraint on f, compared to 3.8% in the P_0 + P_2 + P_4|restricted case at z = 0.5. At z = 1, we find 1.4% compared to 2.9% in the P_0 + P_2 + P_4|restricted case, a factor of ∼2 difference.

Comparison between Fisher matrix and MCMC analysis
We will now present the comparison between the Fisher matrix and MCMC results in the unbiased P_0 + P_2 + P_4|restricted case. For this purpose we focus on comparing the (f, σ_v) contours, since σ_v is the nuisance parameter most correlated with the cosmological parameter of interest, f. We include the full range of comparison plots in Appendix A.
The results are shown in Figure 6. At z = 0.5, we find remarkable agreement between Fisher and MCMC, demonstrating that the multipole expansion formalism method is both reliable and robust. At z = 1, we have plotted two Fisher ellipses: the dotted one, denoted by Fisher|deg, corresponds to the Fisher matrix results presented in Figure 5.

[Fig. 6 caption: Fisher and MCMC comparison for the P_0 + P_2 + P_4|restricted case. We also show the full P(k, µ) Fisher ellipses, but note these correspond to biased f estimates (see Figure 4). As described in the main text, the Fisher|deg contour (dotted line) at z = 1 corresponds to the Fisher matrix shown in the bottom panel of Figure 5, with a near-perfect degeneracy between the (b_2, N) nuisance parameters. The Fisher contour (solid line) is the result after imposing a conservative prior on N that breaks the degeneracy just enough to mitigate the instability it induces. We emphasise that in both cases the final marginalised f constraints remain stable and in very good agreement with the MCMC.]

We see that the area of the (f, σ_v) ellipse is larger than the MCMC one. Inspecting Figure 5 at z = 1, we see that there is a near-perfect degeneracy between the (b_2, N) parameters. We can quantify this using the correlation coefficient

r(p_i, p_j) = (F⁻¹)_ij / √((F⁻¹)_ii (F⁻¹)_jj),   (9)

which characterises parameter degeneracies: r = 0 means p_i and p_j are uncorrelated, while r = ±1 means they are completely (anti)correlated. In the case we are concerned with at z = 1, we find r(b_2, N) = −0.999. This signals a possible instability in the Fisher matrix that might be responsible for the disagreement with the MCMC. To investigate this, we impose a conservative ∼50% prior on the N parameter and rerun the Fisher matrix analysis. This breaks the degeneracy just enough to mitigate the instability, and gives the excellent agreement shown with the solid black line. However, we emphasise that despite this instability, the marginalised constraint on f is in both cases (with and without the prior correction) in very good agreement with the MCMC. This is because it is the (f, σ_v) degeneracy that mostly affects the final, marginalised error on f.

The careful reader will notice that there are some differences between the Fisher and MCMC results, especially regarding the improvement when including the hexadecapole at z = 0.5. This seems to give only marginal gains in constraining power when considering the Fisher matrix, disagreeing with the improvement shown by the MCMC forecast. Given that the MCMC contours are not perfect Gaussian ellipses, we would not expect perfect agreement with the Fisher ellipses. The disagreement is worse at z = 0.5, where the MCMC contours also look less "Gaussian" than the ones at z = 1, so this is not very surprising. We note that similar subtleties have been seen before in Fisher and MCMC comparison studies, and we refer the interested reader to Wolz et al. (2012); Hawken et al. (2012).
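Equation 9 and the Gaussian prior used to regularise the (b_2, N) degeneracy can be sketched as:

```python
import numpy as np

def correlation(F, i, j):
    """Eq. (9): r(p_i, p_j) = (F^-1)_ij / sqrt((F^-1)_ii (F^-1)_jj)."""
    Finv = np.linalg.inv(F)
    return Finv[i, j] / np.sqrt(Finv[i, i] * Finv[j, j])

def with_gaussian_prior(F, i, sigma_prior):
    """A Gaussian prior of width sigma_prior on parameter i adds
    1 / sigma_prior^2 to the (i, i) element of the Fisher matrix."""
    Fp = F.copy()
    Fp[i, i] += 1.0 / sigma_prior ** 2
    return Fp
```

For a near-degenerate pair, |r| → 1 and the matrix becomes ill-conditioned; adding even a weak prior on one member of the pair restores numerical stability while, as found here, leaving the marginalised f constraint essentially unchanged.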
The final, marginalised f constraints from all methods we have used are summarised in Table 2.

SUMMARY AND CONCLUSIONS
In this paper we have assessed the performance of the commonly used TNS model in the context of Stage IV galaxy surveys, considering the multipoles of the redshift space halo power spectrum as our observable. We considered two redshifts, z = 0.5 and z = 1, and made use of PICOLA simulations to perform maximum likelihood, MCMC, and Fisher matrix analyses. Here we summarise our main results and conclude.
We first determined the TNS model's ranges of validity using the monopole and quadrupole power spectra, which contain most of the cosmological information. We note that while this approach is appropriate for the purposes and limitations of this work, in an actual data analysis the bias on f needs to be evaluated directly as a function of k_max. In Bose et al. (2019b), this point is investigated thoroughly by running a large number of MCMC analyses. In the MCMC analysis that followed, we varied the model's four nuisance parameters and the logarithmic growth rate, f.

Footnote 12 (from section 4): A contributing factor to the (b_2, N) degeneracy could be the loop integral involving b_2², which asymptotes to a constant at low k (Desjacques et al. 2018a).
TABLE 2 Marginalised percent errors (at 1σ) on f from the MCMC and Fisher analyses at z = 0.5 and z = 1. We utilise the monopole and quadrupole up to the k_max given in Table 1, and the hexadecapole up to a conservative (restricted) k_max,4, as detailed in the main text. For comparison we show the results for the full P(k, µ) Fisher matrix case, and warn that its deceptively small errors correspond to biased f estimates.
Analysis                                   z = 0.5   z = 1
MCMC:   P_0 + P_2                           3.6%     3.0%
Fisher: P_0 + P_2                           3.9%     3.1%
MCMC:   P_0 + P_2 + P_4|restricted          3.2%     2.6%
Fisher: P_0 + P_2 + P_4|restricted          3.8%     2.9%
Fisher: Full P(k, µ), biased                2.4%     1.4%

The analysis using the k_max from Table 1 shows a significant degeneracy between f and σ_v, which has also been found previously (Zheng et al. 2017). The improvement on the TNS constraints at z = 1 is mainly due to the much higher k_max at z = 1 compared to that at z = 0.5. To investigate the possible improvement from adding information from the hexadecapole, we performed two distinct MCMC analyses at z = 0.5 and 1: one excludes the hexadecapole, using the range of validity found in section 2, and one includes it. As shown in section 3, adding the hexadecapole up to the same k_max as the monopole and quadrupole biases the estimation of f (see also Figure 4). We therefore followed what was done in the BOSS analysis, and restricted its range to a maximum k_max,4 = 0.129 h/Mpc for z = 0.5 and k_max,4 = 0.05 h/Mpc for z = 1, so that the estimation of f remains unbiased at the required 2σ level. Our results are summarised in Table 2.
The addition of the hexadecapole with a restricted k max improves the f constraints, but at a much lower level than using the full P (k, µ) method, which is equivalent to taking P 0 , P 2 and P 4 up to the same k max and leads to biased estimates of f .
Finally, we performed a comprehensive Fisher matrix analysis to quickly explore the parameter space and test whether we can reproduce our MCMC analysis results. Using the multipole expansion formalism for the Fisher matrix we reached very good agreement with the MCMC, as shown in Table 2. We also compared the P 0 + P 2 + P 4 | restricted case with the full P (k, µ) case, showing that the former gives much more conservative error estimates (as well as avoiding bias in the estimate of f itself).
It is useful to try to assess how our forecasted constraints on the growth rate, given the k_max cuts, compare to current requirements for Stage IV surveys. We refer to the Euclid-like forecasts for the growth rate f presented in Amendola et al. (2018) (see their Table 4). At z = 1, with the same survey volume as we use, they find a fractional error on f of 1.4% in their pessimistic scenario, which assumes a number density 16% lower than ours. Their adopted model is the linear Kaiser formula multiplied by a BAO smearing function. They use the full P(k, µ) with k_max ∼ 0.2 h/Mpc. Their set of redshift-dependent parameters includes the growth rate f(z), the linear bias b(z), the residual shot noise P_s, the angular diameter distance D_A(z), and the Hubble rate H(z), but the latter two are not marginalised over. Instead, they are projected onto the set of parameters they depend on, which helps with the information on the background (shape) parameters they also vary. Since the models, parameter sets, forecasting methods, and k_max choices are different, a direct comparison is very difficult, but we note that we find the same fractional error of 1.4% using the full P(k, µ), and a factor of 2 larger error using the hexadecapole cut needed to keep f(z) unbiased at the 2σ level.
In summary, our study reinforces the need for accurate modelling of the non-linear redshift space power spectrum in light of upcoming Stage IV galaxy surveys. It also shows that reliable forecasts can be obtained if the forecast procedure closely follows what is done in real data analyses. To our knowledge, a galaxy clustering forecast analysis for Stage IV surveys using an accurate theoretical template that can be readily applied to data, together with the multipole expansion formalism with different, realistic k_max limits informed by simulations, has not previously been performed in the literature. In addition, we stress that even if a fit to simulations is not available, one should still perform conservative forecasts using the multipole expansion formalism with different ranges for the different multipoles.
That being said, the model considered here is partly phenomenological, and ideally one would want a fully perturbative model that consistently models bias and redshift space distortions, and under which one can obtain a handle on the modelling uncertainties in very general cosmological settings. Several studies have been performed in this direction recently (see, for example, Desjacques et al. 2018b,a;Ivanov and Sibiryakov 2018;Ding et al. 2018;Hand et al. 2017). In the next paper of this series (Bose et al. 2019a) we also push in this direction and perform an extensive investigation using other proposed models like the Effective Theory of Large Scale Structure (see, for example, Perko et al. 2016;de la Bella et al. 2018;Lewandowski et al. 2018), and compare their performance to the TNS model.

ACKNOWLEDGMENTS
We would like to thank the two anonymous referees for their very useful comments and suggestions that improved the quality of this manuscript. We are grateful to Hans Winther for providing the simulation data. We thank Florian Beutler for useful discussions.