Large models fuel impressive AI capabilities. However, they currently do not know what they do not know, and knowing what is not known is critical to important challenges such as exploration and alignment. It is essential, for example, to gathering informative feedback that can further improve AI capabilities. In this talk, I will discuss the need for scalable uncertainty estimation and open issues in the design of architectures and training algorithms for epistemic neural networks, which could serve that need.