Abstract
Work in online learning has traditionally treated induction-friendly (e.g. stochastic with a fixed distribution) and induction-hostile (adversarial) settings separately. While algorithms such as Exp3 that were developed for the adversarial setting are applicable to the stochastic setting as well, the guarantees they achieve there are usually worse than those available for algorithms specifically designed for stochastic settings. Only recently has there been increasing interest in algorithms that give (near-)optimal guarantees with respect to the underlying setting, even when its nature is unknown to the learner. In this paper, we review various online learning algorithms that are able to adapt to the hardness of the underlying problem setting. While our focus lies on the application of adaptive algorithms as meta-inductive methods that combine given base methods, on the theoretical side we are also interested in guarantees that go beyond a comparison to the best fixed base learner.
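As an illustration of the adversarial-setting algorithms the abstract refers to, here is a minimal sketch of Exp3 (exponential weights with importance-weighted reward estimates). The function names, the toy two-armed instance, and the parameter choices are illustrative only and are not taken from the paper:

```python
import math
import random

def exp3(rewards, K, T, gamma=0.1, rng=None):
    """Exp3 for K-armed bandits over T rounds.

    rewards(t, arm) -> reward in [0, 1]. Returns the final
    sampling distribution over the arms.
    """
    rng = rng or random.Random(0)
    w = [1.0] * K  # exponential weights, one per arm
    for t in range(T):
        total = sum(w)
        # Mix the weight distribution with uniform exploration.
        p = [(1 - gamma) * wi / total + gamma / K for wi in w]
        arm = rng.choices(range(K), weights=p)[0]
        x = rewards(t, arm)
        # Importance-weighted estimate keeps the update unbiased.
        xhat = x / p[arm]
        w[arm] *= math.exp(gamma * xhat / K)
    total = sum(w)
    return [(1 - gamma) * wi / total + gamma / K for wi in w]

# Toy stochastic instance: arm 1 has the higher mean reward (0.7 vs 0.3).
reward_rng = random.Random(42)
probs = exp3(lambda t, a: 1.0 if reward_rng.random() < (0.3, 0.7)[a] else 0.0,
             K=2, T=2000, gamma=0.1, rng=random.Random(7))
```

On a stochastic instance like this, Exp3 does concentrate on the better arm, but, as the abstract notes, its regret guarantee is weaker than that of algorithms designed for the stochastic setting; adaptive ("best of both worlds") algorithms aim to close this gap without knowing the setting in advance.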
Original language | English |
---|---|
Pages (from–to) | 433–450 |
Number of pages | 18 |
Journal | Journal for general philosophy of science = Zeitschrift für allgemeine Wissenschaftstheorie |
Volume | 54.2023 |
Issue number | 3 |
DOIs | |
Publication status | Published - 7 Oct. 2023 |
Bibliographic note
Funding Information: The author would like to thank the two anonymous reviewers for their valuable comments. This work has been supported by the Austrian Science Fund (FWF): TAI 590-N and I 3437-N33 in the framework of the CHIST-ERA ERA-NET (DELTA project).
Publisher Copyright:
© 2022, The Author(s).