# Bayes' rule — Odds form
We can reformulate [[Bayes' rule]] in terms of odds:
$
\textrm{Posterior odds} = \textrm{Relative likelihoods} \times \textrm{Prior odds}
$
$
\frac{P(H | e)}{P(\neg H | e)} = \frac{P(e | H)}{P(e | \neg H)} \times \frac{P(H)}{P(\neg H)}
$ ^aa9c13
This form has two practical merits: it is easier to compute mentally, and it is easier to explain to laypeople (see [[Waterfall diagrams and relative odds]]).
For instance, let $H$ be the event “The patient is sick” and $e$ be “The patient tests positive”. Let us suppose that $P(H) = 0.001$, $P(\neg H) = 0.999$, $P(e | H) = 0.95$ and $P(e | \neg H) = 0.01$. Then the prior odds are $\frac{P(H)}{P(\neg H)} \approx 0.001$ (about 1 : 1,000 in favor of being healthy) and the likelihood ratio is $\frac{P(e | H)}{P(e | \neg H)} = 95 \approx 100$.
A doctor could then give this explanation to a positive patient: “A priori, a random patient is 1,000 times as likely to be healthy as to be sick. However, this test is only 100 times as likely to be positive for sick patients as for healthy patients. Therefore, you are now ten times as likely to be healthy as to be sick, which is still quite good!”
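The doctor's reasoning can be sketched as a short computation; a minimal sketch using the numbers from the example above:

```python
# Odds-form Bayes' rule for the medical-test example in the text:
# P(H) = 0.001, P(e|H) = 0.95, P(e|~H) = 0.01.
p_h = 0.001             # prior probability of being sick
p_e_given_h = 0.95      # true-positive rate
p_e_given_not_h = 0.01  # false-positive rate

prior_odds = p_h / (1 - p_h)                      # sick : healthy, ~1/1000
likelihood_ratio = p_e_given_h / p_e_given_not_h  # 95, roughly 100

# Posterior odds = likelihood ratio x prior odds.
posterior_odds = likelihood_ratio * prior_odds    # ~0.095, i.e. ~1 : 10

# Convert odds o back to a probability via o / (1 + o).
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior odds: {posterior_odds:.3f}")
print(f"posterior probability of being sick: {posterior_prob:.3f}")
```

Even after a positive test, the posterior probability of being sick is under 10%, matching the doctor's "ten times as likely to be healthy" conclusion.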
It also makes the statement that “[[Extraordinary claims require extraordinary evidence]]” easier to understand.
## Vector Form
The odds form can be generalized to a vector form:
$
\mathbb{O}(\mathbf{H} | e) = \mathcal{L}_e(\mathbf{H}) \times \mathbb{O}(\mathbf{H})
$
where $\mathbf{H} = [H_1, \dots, H_n]$ is a vector of hypotheses, $\mathbb{O}(\mathbf{H})$ is the vector of relative prior odds between the $H_i$, $\mathcal{L}_e(\mathbf{H})$ is the vector of relative likelihoods with which each $H_i$ predicted $e$, and $\mathbb{O}(\mathbf{H} | e)$ is the vector of relative posterior odds between the $H_i$. The product is taken elementwise.
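The vector form amounts to an elementwise product followed by an optional normalization. A minimal sketch with three hypothetical hypotheses (the odds and likelihoods below are made-up numbers for illustration):

```python
# Vector form of the odds rule: multiply prior odds and likelihoods
# elementwise, then normalize if posterior probabilities are wanted.
prior_odds = [1.0, 2.0, 4.0]   # O(H): relative odds H1 : H2 : H3
likelihoods = [0.9, 0.3, 0.1]  # L_e(H): P(e | H_i) for each hypothesis

# Relative posterior odds (elementwise product).
posterior_odds = [l * o for l, o in zip(likelihoods, prior_odds)]
# The ratios 0.9 : 0.6 : 0.4 simplify to 9 : 6 : 4.

# If the H_i are mutually exclusive and exhaustive, normalizing
# the odds vector recovers the posterior probabilities.
total = sum(posterior_odds)
posteriors = [o / total for o in posterior_odds]
print(posteriors)
```

Only the *ratios* between entries matter before normalization, which is why the rule works with relative rather than absolute quantities.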
---
## 📚 References
- [Introduction to Bayes' rule: Odds form](https://arbital.com/p/bayes_rule_odds/?l=1x8&pathId=77281)