Learning from data is a fundamental problem in statistics with applications in almost every aspect of our lives. There are two natural classes of strategies for learning from data. Non-adaptive strategies collect all the required data and then process it in one batch, while adaptive strategies can use the accumulated information to decide whether more samples should be collected. Moreover, adaptive methods can also adapt the way new information is gathered according to what was observed in previous steps. Since our goal is to minimise the required amount of data, in this thesis we study the optimal number of resources for both approaches in various classical and quantum learning problems.
For testing classical distributions, we show that adaptive strategies outperform their non-adaptive counterparts by a factor of four when the alphabet size is small. In addition, adaptive strategies can stop earlier when the tested distributions are far from each other; this advantage persists even for larger alphabets.
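As a toy illustration of the early-stopping phenomenon (not a construction from this thesis), a sequential probability ratio test on a biased coin stops as soon as the accumulated evidence is decisive, and it stops sooner the further the true distribution is from the hypotheses. All names, thresholds, and parameters below are illustrative assumptions, not the thesis's actual testers.

```python
import math
import random

def sprt_coin(p0, p1, alpha=0.05, beta=0.05, true_p=0.8, seed=0, max_n=10000):
    """Sequential probability ratio test for a coin: H0 (bias p0) vs H1 (bias p1).

    Draws samples one at a time from a coin with bias true_p and stops as soon
    as the log-likelihood ratio crosses a stopping threshold.
    Returns (decision, number_of_samples_used).
    """
    rng = random.Random(seed)
    a = math.log(beta / (1 - alpha))      # lower threshold: accept H0
    b = math.log((1 - beta) / alpha)      # upper threshold: accept H1
    llr = 0.0                             # running log-likelihood ratio
    for n in range(1, max_n + 1):
        x = 1 if rng.random() < true_p else 0
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= b:
            return "H1", n
        if llr <= a:
            return "H0", n
    return "undecided", max_n

# Extreme case: every toss is heads, so the test stops very quickly.
print(sprt_coin(p0=0.5, p1=0.6, true_p=1.0))  # → ('H1', 17)
```

A non-adaptive test must fix its sample size in advance for the worst case, whereas the sequential test above uses far fewer samples on easy instances; this is the flavour of the adaptive early-stopping advantage.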
Concerning the testing of quantum states, we exhibit situations where adaptive strategies have a provable advantage over non-adaptive ones. These include binary hypothesis testing, as well as testing identity and closeness of quantum states.
In certain situations, however, there are multiple ways of querying the data. These situations can be modelled by quantum channels that output data after receiving an input chosen by the learning strategy. Adaptive strategies can adapt these inputs to previous observations, whereas non-adaptive ones must fix all inputs in advance. We determine the optimal complexity of testing whether a channel is perfect or not. Moreover, we characterise the optimal number of resources for learning a general channel or a Pauli noise in the non-adaptive setting, and we narrow the gap between adaptive and non-adaptive strategies for learning Pauli noise.