In this talk I will discuss a classical simulation scheme for linear-optical experiments subject to particle losses, focusing on the conditions under which it is efficient. Concretely, consider the canonical boson sampling scenario in which an n-photon input state propagates through an m-mode
linear-optical network and is subsequently measured in the output modes. I will describe simulations
of this system based on two models of losses. In the first model, a fixed number of particles is lost,
and we show that the output statistics can be efficiently approximated provided the number of surviving photons grows sufficiently slowly. In the second loss model, each photon passing through a beamsplitter in the network has some probability of being lost. For this model the relevant parameter is s, the smallest number of beamsplitters that any photon must traverse. We prove that the output statistics can be approximated efficiently as soon as s grows logarithmically with m, regardless of the geometry of the network. The latter result follows from showing that s layers of uniform losses can always be commuted to the input of the network. Finally, I discuss how these findings place limitations on future experimental realizations of quantum computational supremacy proposals based on boson sampling.
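One ingredient behind the commutation argument can be checked numerically: composing s independent layers of uniform loss, each with per-photon transmission η, yields the same photon-number statistics as a single loss channel with transmission η^s acting on the input. The sketch below verifies this binomial composition identity exactly (it illustrates only how loss layers compose, not the full commutation past the interferometer's unitaries; the parameters n, eta, and s are illustrative, not taken from the talk).

```python
from math import comb

def binomial_pmf(n, p):
    # Probability of k photons surviving out of n, each kept with probability p.
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def apply_loss_layer(pmf, p):
    # One layer of uniform loss: each surviving photon is independently
    # transmitted with probability p (binomial thinning of the distribution).
    n = len(pmf) - 1
    out = [0.0] * (n + 1)
    for j, pj in enumerate(pmf):
        for k in range(j + 1):
            out[k] += pj * comb(j, k) * p**k * (1 - p)**(j - k)
    return out

n, eta, s = 6, 0.9, 4          # illustrative values: 6 photons, 4 loss layers
pmf = [0.0] * n + [1.0]        # start with exactly n photons
for _ in range(s):
    pmf = apply_loss_layer(pmf, eta)

# A single loss channel with transmission eta**s at the input gives the same pmf.
single = binomial_pmf(n, eta**s)
assert all(abs(a - b) < 1e-12 for a, b in zip(pmf, single))
```

Because each layer acts as an independent binomial thinning, the survival probabilities simply multiply, which is why uniform losses can be aggregated into one channel before being commuted through the network.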