Large-scale science instruments, such as the distributed radio telescope LOFAR, show that we are in an era of data-intensive scientific discovery. Such instruments rely critically on significant computing resources, both hardware and software, to produce science. Given limited science budgets, and the small fraction of these that can be dedicated to compute hardware and software, there is a strong and obvious desire for low-cost computing. However, optimising for cost is only part of the equation; the value potential over the lifetime of the solution should also be taken into account. Using a tangible example, compute hardware, we introduce a conceptual model to approximate the relative science value of such a system over its lifetime. While the introduced model is not intended to yield a numeric merit score, it does enumerate the components that define this metric. The intent of this paper is to show how compute-system design and procurement decisions in data-intensive science projects should be weighed and valued. By using both total cost and science value as drivers, the science output per invested Euro is maximised. With a number of case studies, focused on computing applications in radio astronomy, past, present and future, we show that this hardware-based analysis can be, and has been, applied more broadly.