So far in this course, we have looked at single events and unions of multiple events. This topic explores events whose probability depends on other events that occur before or alongside them. We define this as the probability of an event given the condition of another event - "the probability of E, given F":
P(E∣F) = P(E∩F) / P(F)
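As a quick illustration (a hypothetical two-dice example, not taken from the course material), the conditional probability can be computed directly by counting outcomes in a finite sample space:

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: roll two fair six-sided dice.
# E = "the sum is 8", F = "the first die shows an even number".
space = list(product(range(1, 7), repeat=2))
p = lambda event: Fraction(len(event), len(space))

E = {(a, b) for a, b in space if a + b == 8}
F = {(a, b) for a, b in space if a % 2 == 0}

print(p(E))              # 5/36 - unconditional probability of E
print(p(E & F) / p(F))   # 1/6  - P(E | F) = P(E ∩ F) / P(F)
```

Knowing that the first die is even slightly raises the probability that the sum is 8, so here P(E∣F) ≠ P(E).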
Independence
Two events are said to be independent if the following holds:
P(E∩F)=P(E)P(F)
This means that whether one event occurs does not affect the probability of the other. Being independent is not the same as being mutually exclusive; in fact, two mutually exclusive events that each have non-zero probability are never independent, since P(E∩F) = 0 while P(E)P(F) > 0.
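A small sanity check (again a hypothetical dice example, not from the notes): "first die is even" and "the sum is 7" satisfy the product rule, while two mutually exclusive events do not:

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: two fair dice.
# E = "first die is even", F = "sum is 7" - these turn out to be independent.
# F and G = "sum is 11" are mutually exclusive, hence NOT independent.
space = list(product(range(1, 7), repeat=2))
p = lambda ev: Fraction(len(ev), len(space))

E = {(a, b) for a, b in space if a % 2 == 0}
F = {(a, b) for a, b in space if a + b == 7}
G = {(a, b) for a, b in space if a + b == 11}

print(p(E & F) == p(E) * p(F))   # True:  1/12 == 1/2 * 1/6
print(p(F & G) == p(F) * p(G))   # False: 0 != 1/6 * 1/18
```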
This also affects conditional probability calculations: if E and F are independent and P(F) ≠ 0, then:
P(E∣F) = P(E∩F) / P(F) = P(E)P(F) / P(F) = P(E)
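Continuing the hypothetical dice example above, conditioning on the independent event F leaves the probability of E unchanged:

```python
from fractions import Fraction
from itertools import product

# E = "first die is even" and F = "sum is 7" are independent,
# so P(E | F) should equal P(E).
space = list(product(range(1, 7), repeat=2))
p = lambda ev: Fraction(len(ev), len(space))

E = {(a, b) for a, b in space if a % 2 == 0}
F = {(a, b) for a, b in space if a + b == 7}

print(p(E & F) / p(F))   # 1/2 - P(E | F)
print(p(E))              # 1/2 - P(E), equal as the identity predicts
```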
Sequence of events
For any given sequence of events, we can say they are independent only if every subset of them satisfies the corresponding product rule. This is defined formally as:
A sequence E_1, E_2, ..., E_n is said to be independent if for every i_1 < i_2 < ... < i_r such that i_j ∈ {1, 2, ..., n} and 1 ≤ r ≤ n, we have:
P(E_{i_1} ∩ E_{i_2} ∩ ... ∩ E_{i_r}) = ∏_{j=1}^{r} P(E_{i_j})
Note that pairwise independence does not imply independence; the product condition must also hold for the whole set together:
P(E_1 ∩ E_2 ∩ ... ∩ E_n) = ∏_{i=1}^{n} P(E_i)
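The classic counterexample (a hypothetical illustration, not from the notes) uses two fair coin flips with A = "first flip is heads", B = "second flip is heads" and C = "both flips match": every pair satisfies the product rule, but the triple does not, so the sequence is pairwise independent without being independent. A minimal sketch checking every subset:

```python
from fractions import Fraction
from itertools import combinations, product

# Two fair coin flips; A, B, C as described above.
space = list(product("HT", repeat=2))
p = lambda ev: Fraction(len(ev), len(space))

events = {
    "A": {s for s in space if s[0] == "H"},   # first flip heads
    "B": {s for s in space if s[1] == "H"},   # second flip heads
    "C": {s for s in space if s[0] == s[1]},  # both flips match
}

for r in (2, 3):
    for names in combinations(events, r):
        inter = set(space)
        prod_p = Fraction(1)
        for name in names:
            inter &= events[name]
            prod_p *= p(events[name])
        print(names, p(inter) == prod_p)
# All pairs print True; ('A', 'B', 'C') prints False (1/4 != 1/8).
```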
The joint probability P(E_1 ∩ E_2 ∩ ... ∩ E_n) can also be written as a multiplication rule for probabilities, which reduces to the above product when the events are independent: