Brains have an impressive ability to store information about the external world on time scales ranging from seconds to years. The rules of information storage in neuronal circuits are the subject of ongoing debate. Theorists have proposed two scenarios: in the first, specific patterns of activity representing external stimuli become fixed-point attractors of the network dynamics. In the second, the network stores sequences of activity patterns, so that when the first pattern is presented the network retrieves the whole sequence. In both scenarios, the correct dynamics are achieved through appropriate changes in network connectivity. I will describe how methods from statistical physics can be used to investigate the storage capacity of such networks, and the statistical properties of the network connectivity that optimizes information storage (distribution of synaptic weights, probabilities of specific network motifs, degree distributions, etc.) in both scenarios. Finally, I will compare the theoretical results with available data on cortical connectivity.
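The two scenarios can be sketched with a minimal Hopfield-style binary network. This is a common idealization, not the specific models analyzed here: the network size, pattern count, Hebbian learning rule, and synchronous update below are illustrative assumptions. Symmetric Hebbian weights make each stored pattern a fixed-point attractor (scenario 1), while asymmetric weights that map each pattern onto the next retrieve a whole sequence from its first pattern (scenario 2):

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 5                        # neurons, stored patterns (well below capacity)
patterns = rng.choice([-1, 1], size=(P, N))

def step(W, s):
    """One synchronous update of binary (+/-1) units."""
    return np.where(W @ s >= 0, 1, -1)

# Scenario 1: symmetric Hebbian weights make each pattern a fixed-point attractor.
W_attr = (patterns.T @ patterns) / N
np.fill_diagonal(W_attr, 0)          # no self-connections

cue = patterns[0].copy()
cue[rng.choice(N, size=10, replace=False)] *= -1   # corrupt 10% of the bits
s = cue
for _ in range(20):                  # iterate toward the fixed point
    s = step(W_attr, s)
overlap = (s @ patterns[0]) / N      # close to 1 when pattern 0 is recovered

# Scenario 2: asymmetric weights map each pattern onto the next one,
# so presenting the first pattern retrieves the whole sequence.
W_seq = (patterns[1:].T @ patterns[:-1]) / N
s = patterns[0]
seq_overlaps = []
for mu in range(1, P):
    s = step(W_seq, s)
    seq_overlaps.append((s @ patterns[mu]) / N)
```

For random patterns and this simple Hebbian rule, the number of storable patterns scales linearly with N (roughly 0.14N for the attractor case); the statistical-physics methods mentioned above characterize how capacity and the optimal connectivity statistics change when storage is optimized rather than fixed by a Hebbian prescription.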