Recent releases of frontier artificial intelligence (AI) models have largely been gated, which limits the proliferation of increasingly powerful dual-use capabilities. However, such release strategies introduce the challenge of providing sufficient access to the model to enable external research and evaluation. One potential solution is structured access: giving external researchers the minimal access sufficient to conduct research on frontier models, whilst minimising proliferation risks. In this paper we address the question of what access such solutions should provide in order to facilitate external research. We develop a ‘taxonomy of system access’, conduct a literature analysis focussing on the forms of access most frequently used in prior research, and carry out semi-structured interviews with AI researchers to build a more detailed picture of the access they find most important for their work. Our findings show that access to relevant models frequently limits research, but that the access required varies greatly depending on the specific research area. Based on our findings, we make recommendations for the design of a ‘research API’ to facilitate external research and evaluation of proprietary frontier models.