We present the determination of a set of parton distributions of the nucleon, at next-to-leading order, from a global set of deep-inelastic scattering data: NNPDF1.0. The determination is based on a Monte Carlo approach, with neural networks used as unbiased interpolants. This method, previously discussed by us and applied to a determination of the nonsinglet quark distribution, is designed to provide a faithful and statistically sound representation of the uncertainty on parton distributions. We discuss our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the evolution equations and its benchmarking, and the method used to compute physical observables. We discuss the parametrization and fitting of neural networks, and the algorithm used to determine the optimal fit. We finally present our set of parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent parton sets. We use it to compute the benchmark W and Z cross sections at the LHC. We discuss issues of delivery and interfacing to commonly used packages such as LHAPDF.
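The Monte Carlo approach mentioned above propagates experimental uncertainties by generating an ensemble of pseudo-data replicas of the measured points and fitting each replica independently; the spread of the resulting fits then gives the uncertainty on the parton distributions. A minimal sketch of the replica-generation step, assuming uncorrelated Gaussian uncertainties and purely illustrative numbers (not actual DIS data), is:

```python
import random
import statistics

# Hypothetical central values and uncorrelated uncertainties for a few
# data points (illustrative numbers only, not actual DIS measurements).
central = [0.42, 0.37, 0.29]
sigma = [0.02, 0.03, 0.01]

def make_replica(central, sigma, rng):
    """One Monte Carlo pseudo-data replica: Gaussian smearing of each point."""
    return [c + rng.gauss(0.0, s) for c, s in zip(central, sigma)]

rng = random.Random(0)
n_rep = 1000
replicas = [make_replica(central, sigma, rng) for _ in range(n_rep)]

# In the large-N_rep limit the ensemble mean and standard deviation of each
# point reproduce the input central value and uncertainty, so statistics
# computed over the replica ensemble faithfully represent the data errors.
means = [statistics.mean(r[i] for r in replicas) for i in range(len(central))]
stdevs = [statistics.stdev(r[i] for r in replicas) for i in range(len(central))]
```

In the full method each replica would be fitted by a neural-network parametrization, and central values and uncertainties of any observable follow as mean and standard deviation over the ensemble of fits; correlated systematics would enter through a multivariate smearing rather than the independent Gaussians used here.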
Journal: Nuclear Physics B
Publication status: Published - 8 Aug 2008
Bibliographical note: 73 pages, 16 figures; final version, to be published in Nucl. Phys. B. Two figures added (Figs. 6 and 10); discussion of inconsistent data added (Sect. 5.5); one reference added; numerous typos corrected and a few clarifications added. Missing factor of r_f supplied in Eqs. 132, 133, 137, 138, 142, 143, 149, 150, 153, and 154.