Substantial progress has been made in recent years in the development of atomistic potentials employing machine learning (ML) techniques. In contrast to most conventional potentials, which are based on physical approximations, ML potentials rely on simple but very flexible mathematical expressions without a direct physical meaning, with the aim of reproducing a set of reference electronic structure data as accurately as possible. Due to this bias-free construction, they are applicable to a wide range of systems without changes in their functional form, and a very high accuracy, close to that of the underlying first-principles data, can be obtained. Neural network potentials (NNPs), first proposed about two decades ago, are an important class of ML potentials. While the first NNPs were restricted to small molecules with only a few degrees of freedom, they are now applicable to high-dimensional systems containing thousands of atoms, which enables addressing a variety of problems in chemistry, physics and materials science. In this talk the underlying concepts of high-dimensional NNPs are presented, with a special focus on constructing NNPs for condensed systems. Applications to various types of systems, ranging from solids and liquid water to solid-liquid interfaces, are presented.
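The high-dimensional approach mentioned above can be illustrated by a minimal sketch in the Behler-Parrinello spirit: the total energy is written as a sum of atomic energies, each predicted by a small neural network from a descriptor of the atom's local chemical environment. All numerical values below (symmetry-function widths, cutoff radius, network weights) are illustrative placeholders, not fitted parameters.

```python
import numpy as np

def radial_descriptors(positions, etas, r_cut=6.0):
    """Radial symmetry functions G_i = sum_j exp(-eta * r_ij^2) * f_cut(r_ij),
    a simple rotation- and translation-invariant environment descriptor."""
    n = len(positions)
    G = np.zeros((n, len(etas)))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                fc = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)  # smooth cutoff
                G[i] += np.exp(-etas * r**2) * fc
    return G

def atomic_energies(G, W1, b1, W2, b2):
    """Tiny feed-forward network mapping each atom's descriptor vector
    to a per-atom energy contribution (same network for all atoms)."""
    h = np.tanh(G @ W1 + b1)
    return (h @ W2 + b2).ravel()

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 5.0, size=(8, 3))  # 8 atoms, arbitrary geometry
etas = np.array([0.5, 1.0, 2.0])                # descriptor widths (arbitrary)
G = radial_descriptors(positions, etas)

W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)  # untrained toy weights
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)

E_atomic = atomic_energies(G, W1, b1, W2, b2)
E_total = E_atomic.sum()                        # E = sum_i E_i
print(f"per-atom energies: {np.round(E_atomic, 3)}")
print(f"total energy: {E_total:.3f}")
```

Because the same network is applied to every atom and the descriptors depend only on interatomic distances, the total energy is invariant under permutation of identical atoms and the model scales naturally to large systems; in practice the network weights would be fitted to reference electronic structure data.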