
Multi-Armed Bandits - Qing Zhao - Book

About Multi-Armed Bandits

Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem was posed by Thompson in 1933 for the application of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and socio-economic systems, aiming to illuminate the connections between the Bayesian and frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
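
As a purely illustrative sketch (not taken from the book), the snippet below contrasts the two formulations on simulated Bernoulli-reward arms: Thompson sampling as the Bayesian strategy and the UCB1 index as a frequentist one. The arm probabilities, horizon, and function names are arbitrary assumptions made for this example.

# Illustrative only, not from the book: Bayesian vs. frequentist bandit strategies
# on Bernoulli-reward arms with hypothetical success probabilities.
import math
import random

def thompson_sampling(true_means, horizon=10_000):
    """Bayesian approach: keep a Beta posterior per arm and pull the arm
    whose posterior sample is largest."""
    k = len(true_means)
    successes = [0] * k
    failures = [0] * k
    total_reward = 0
    for _ in range(horizon):
        samples = [random.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        r = 1 if random.random() < true_means[arm] else 0
        successes[arm] += r
        failures[arm] += 1 - r
        total_reward += r
    return total_reward

def ucb1(true_means, horizon=10_000):
    """Frequentist approach: pull the arm with the largest upper confidence
    bound on its empirical mean (the UCB1 index)."""
    k = len(true_means)
    counts = [0] * k
    sums = [0.0] * k
    total_reward = 0
    for t in range(1, horizon + 1):
        if t <= k:  # play each arm once to initialise the estimates
            arm = t - 1
        else:
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1 if random.random() < true_means[arm] else 0
        counts[arm] += 1
        sums[arm] += r
        total_reward += r
    return total_reward

if __name__ == "__main__":
    arms = [0.2, 0.5, 0.7]  # hypothetical unknown success probabilities
    print("Thompson sampling reward:", thompson_sampling(arms))
    print("UCB1 reward:             ", ucb1(arms))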

  • Language: English
  • ISBN: 9783031792885
  • Binding: Paperback
  • Pages: 168
  • Published: 21 November 2019
  • Dimensions: 191 x 10 x 235 mm
  • Weight: 327 g