In the previous chapter, the generated numbers were all equally likely to occur. But what if we want some numbers to occur more frequently than others? This idea of assigning different weights, or probabilities, to different numbers is captured by a probability distribution.
Probability distributions are often used in simulations of real-life phenomena!
The probability mass function (PMF) of a discrete random variable describes the probability of each possible outcome occurring. For instance, let us create a probability table for the likelihood of a die roll, where x is the outcome of the roll:

| x | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| P(X = x) | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 |
- Since each outcome is equally likely in this particular scenario, the PMF stays constant at 1/6 for every possible outcome.
- Notice that there are only 6 possible outcomes; this makes X a discrete random variable!
Additionally, the cumulative distribution function (CDF) of a discrete random variable gives the probability that the variable takes on a value less than or equal to a chosen value. Adding on to the table earlier, we get:

| x | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| P(X ≤ x) | 1/6 | 2/6 | 3/6 | 4/6 | 5/6 | 6/6 |
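As a quick sketch of these definitions, the die's PMF and its running-sum CDF can be computed in a few lines of Python (the `Fraction` type keeps the probabilities exact):

```python
from fractions import Fraction

# PMF of a fair six-sided die: every outcome is equally likely.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# CDF: P(X <= x) is the running sum of the PMF up to and including x.
cdf = {}
total = Fraction(0)
for x in sorted(pmf):
    total += pmf[x]
    cdf[x] = total

print(cdf[3])  # 1/2, i.e. half of all rolls land on 1, 2, or 3
print(cdf[6])  # 1, since some outcome must occur
```

Note how the CDF ends at exactly 1: summing the whole PMF always accounts for every possible outcome.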
From a visual perspective, the CDF is useful as it allows us to easily find outliers or clusters of data.
The outliers are more easily visible as they sit near the flat top of the CDF.
Continuous Random Variables (CRV)
Another kind of variable, known as a continuous random variable, can take on infinitely many values between a specified minimum and maximum.
- E.g. the time taken to travel from one place to another is a continuous variable, as it can take on infinitely many possible durations (seconds, milliseconds, and so on).
For any continuous random variable, which can take on an infinite range of values, the probability that a generated random number lands exactly on a specific value is 0.¹ Hence, the probability density function (PDF) of a continuous random variable gives the probability of the variable falling within a particular range of values in the sample distribution.
This is shown as the area under the continuous variable’s probability density function, as follows:

$$P(a \le X \le b) = \int_a^b f(x)\,dx$$

- $\int_a^b$ is the integral, or area, from $a$ to $b$
- $dx$ means we integrate with respect to $x$
- $f(x)$ is the probability density function
¹ where $f(x)$ represents the probability density function.
Meanwhile, the cumulative distribution function (CDF) of a continuous random variable is derived from its probability density function (as shown above). The cumulative distribution function is defined for any random variable as

$$F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt$$

which is the probability that the random variable takes a value less than or equal to $x$.
How are the PMF and CDF related for a DRV, and the PDF and CDF for a CRV?
For a continuous random variable (CRV),

$$F(x) = \int_{-\infty}^{x} f(t)\,dt$$

If we substitute this into the equation for the PDF of a CRV above, we get:

$$P(a \le X \le b) = \int_a^b f(x)\,dx = F(b) - F(a)$$

This means that the definite integral of the probability density function (for a continuous random variable) is the same as the difference of the cumulative distribution function values.
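To make this relationship concrete, here is a small numerical sketch. The chapter does not fix a particular distribution here, so this assumes an exponential distribution (rate λ = 1.5), whose PDF and CDF have simple closed forms, and checks that the area under the PDF from a to b matches F(b) − F(a):

```python
import math

LAM = 1.5  # assumed rate parameter for the exponential distribution

def pdf(x):
    # f(x) = lam * e^(-lam * x) for x >= 0
    return LAM * math.exp(-LAM * x)

def cdf(x):
    # F(x) = 1 - e^(-lam * x) for x >= 0
    return 1 - math.exp(-LAM * x)

def integrate(f, a, b, n=10_000):
    # Trapezoidal rule: approximate the area under f from a to b.
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

a, b = 0.5, 2.0
area = integrate(pdf, a, b)
diff = cdf(b) - cdf(a)
print(abs(area - diff) < 1e-6)  # the two quantities agree
```

Swapping in any other valid PDF/CDF pair gives the same agreement, since the relationship is just the fundamental theorem of calculus.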
For a DRV,

$$F(x) = \sum_{x_i \le x} p(x_i)$$

The value of $F(x)$ (aka the CDF) increments at specific intervals² equivalent to $p(x)$ (aka the PMF) at each possible value of $X$.
² Recall in the probability table earlier for a die 🎲 that for a DRV, the CDF increments by 1/6, i.e. by the individual probabilities.
Wait, why is $P(X = x)$ called a PMF and not a PDF for a discrete random variable?
The probability mass function (PMF) is often confused with the probability density function (PDF). We will see below why "PDF" is usually not used to describe a DRV!
First, we take a look at the distribution of a continuous random variable. The unit of the y-axis is a kind of “probability per unit length”, as it is the output of the probability density function.
The area between a and b is simply the integral of the probability density function.
For a discrete random variable, the unit of the y-axis is simply a probability.
The "area" between a and b is found by adding up the individual probabilities for values between a and b, which is like integrating the probabilities over a region of x.
We thus term this function ("finding the area of the probability") a probability mass function instead, similar to how integrating density gives rise to mass in physics.
The linear problem: A Thought Experiment 🤔
Sometimes, not all random values have an equal chance of being chosen.
Imagine, for instance, if we were to attempt to simulate the potential of a Covid-19 variant to spread through a city population.
Modelling complex daily interactions means taking into account multiple random numbers simulating the timings and places where different individuals interact.
Adding the effects of pandemic public health measures – 💉😷🧍▫️▫️🧍 – will further adjust the distribution of probabilities for the random numbers, which will affect the numbers generated as time progresses in the simulation.
One method to generate random number samples that reflect the underlying distribution is inverse transform sampling (ITS). ITS is a method to generate samples at random from any probability distribution, given its cumulative distribution function.
How ITS works
Continuing from our earlier example of simulating the spread of Covid-19, let's model the number of people who visit the MRT on a daily basis. We can model this daily number using a Poisson distribution. The Poisson distribution expresses the probability of a given number of events occurring in a given duration if (1) these events occur at a known constant rate (e.g. frequency) and (2) each event is independent of the previous one.¹
¹ While the second assumption is likely to be incorrect here, we will accept it for this scenario for now.
Let’s visualize how ITS works!
First, we plot the equation of the Poisson distribution in blue – which is given by:

$$P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}$$

- $k$ is the expected integer number of occurrences
- $\lambda$ is the average number of events per interval
Next, we plot the cumulative distribution function of the Poisson distribution in green:

$$F(k) = e^{-\lambda} \sum_{i=0}^{\lfloor k \rfloor} \frac{\lambda^i}{i!}$$

- $\lfloor k \rfloor$ is the floor function of $k$
- The CDF is not a smooth curve as it jumps only at integer values (Desmos calculates only individual points for this function).
- Try drawing a horizontal line (y = ?) to find its intersection with the curve for the different random numbers possible on the axis
- There is a higher probability of getting values $0 < x < 1$ compared to values $x > 3$ (for this lambda value), as those probabilities are very slim
- This is shown in the image below:
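The two Poisson formulas above can be sketched directly in Python; `math.factorial` keeps the PMF exact for small k, and the CDF is just the running sum of the PMF up to the floor of k:

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) = lam^k * e^(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

def poisson_cdf(k, lam):
    # F(k) = sum of the PMF over all integers from 0 to floor(k)
    return sum(poisson_pmf(i, lam) for i in range(math.floor(k) + 1))

lam = 1.0
print(round(poisson_pmf(0, lam), 4))  # 0.3679, i.e. e^-1
print(round(poisson_cdf(3, lam), 4))  # 0.981, most mass sits at small k
```

This mirrors the observation above: for a small λ, almost all of the probability mass is concentrated at the first few integer values.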
How can we make random numbers given a certain probability using this distribution? 🤔
I know, we can simply “swap” x and y in the CDF above by taking its inverse, $x = F^{-1}(y)$. This makes the probability the independent variable and the random number the output (dependent variable).
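A minimal sketch of this idea for the Poisson case: since the CDF is a step function, "inverting" it means drawing u ~ Uniform(0, 1) and walking up the CDF until it first reaches u. (The recurrence p(k) = p(k−1)·λ/k is just an efficiency trick to avoid recomputing factorials.)

```python
import math
import random

def poisson_inverse_sample(lam, rng=random):
    # Inverse transform sampling for a discrete CDF:
    # draw u, then return the smallest k with F(k) >= u.
    u = rng.random()
    k = 0
    pmf = math.exp(-lam)   # P(X = 0)
    cumulative = pmf       # F(0)
    while cumulative < u:
        k += 1
        pmf *= lam / k     # recurrence: p(k) = p(k-1) * lam / k
        cumulative += pmf
    return k

random.seed(0)
samples = [poisson_inverse_sample(3.0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean should be close to lam = 3.0
```

The sample mean converging to λ is a quick sanity check that the generated numbers really do follow the Poisson distribution.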
Drawbacks of this method
However, notice that if we reduce the value of λ, the values become more clustered. To get an accurate average, we are forced to take more samples, which takes more time ⏱️ (and is computationally intensive).
Other methods: The Acceptance Rejection Sampling method
This method comes from a broader class of methods grouped under Monte Carlo simulations.
Imagine wanting to estimate the area of a circle. We get a perfectly circular dish with radius X cm and a square dish with sides of length X cm. We lay these two dishes a distance apart from each other, and begin randomly depositing marbles of similar mass uniformly over the combined area of the dishes. By the end of the experiment, the ratio of marbles in the circular dish to those in the square dish approximates the ratio of their areas, $\pi X^2 / X^2 = \pi$.
This example can be seen at https://www.youtube.com/watch?v=7ESK5SaP-bc
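The marble experiment can be sketched in code. Here we use the common equivalent variant of dropping points uniformly in a unit square and counting how many land inside the inscribed quarter circle (a slightly different setup from the two dishes, but the same idea of comparing areas by counting uniform samples):

```python
import random

def estimate_pi(n, rng=random):
    # Drop n "marbles" uniformly over the unit square; the fraction
    # landing inside the quarter circle x^2 + y^2 <= 1 approximates pi/4.
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1:
            inside += 1
    return 4 * inside / n

random.seed(42)
pi_est = estimate_pi(1_000_000)
print(pi_est)  # should be close to 3.1416
```

As with inverse transform sampling, accuracy improves only slowly with more samples: the error shrinks roughly like 1/√n.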
Similarly, the acceptance-rejection sampling method replaces the dishes with the probability density function of a variable, and samples uniformly within a bounding box enclosing the graph. It rejects sampled points that fall above the density curve, and returns the x-values of points which fall under the graph.
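A minimal sketch of acceptance-rejection sampling, assuming a simple triangular density on [0, 2] as the target (any bounded PDF on a finite interval works the same way):

```python
import random

def rejection_sample(pdf, lo, hi, pdf_max, rng=random):
    # Sample (x, y) uniformly in the box [lo, hi] x [0, pdf_max];
    # accept x only when y falls under the density curve.
    while True:
        x = rng.uniform(lo, hi)
        y = rng.uniform(0, pdf_max)
        if y <= pdf(x):
            return x

def tri_pdf(x):
    # Triangular density on [0, 2], peaking at x = 1 with height 1.
    return x if x <= 1 else 2 - x

random.seed(1)
xs = [rejection_sample(tri_pdf, 0, 2, 1) for _ in range(50_000)]
print(sum(xs) / len(xs))  # sample mean should be near 1.0, the peak
```

The acceptance rate equals the ratio of the area under the curve to the area of the bounding box, which is exactly the intuition behind the next point.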
Intuitively, one realises that if the distribution is highly limited or concentrated in area, there may be a lot of rejected, wasted samples during sampling.
Main Ideas of this chapter!
- random numbers generated from random variables fall into 2 types – discrete and continuous
- we can use functions such as the probability mass/density function and the cumulative distribution function to analyse how these random variables are related to their distributions
- inverse transform sampling and Monte Carlo methods (e.g. acceptance-rejection sampling) are possible ways to generate numbers that follow a certain distribution