
Gamma, Erlang, Chi-square distributions... all the same beast

June 09, 2021 21 min read

Probably the most important distribution in the whole field of mathematical statistics is the Gamma distribution. Its special cases arise in various branches of mathematics under different names - e.g. Erlang or Chi-square (and the Weibull distribution is also strongly related) - but they are essentially the same family of distributions, and this post is supposed to provide some intuition about them.

I’ll first explain how the Gamma distribution relates to the Poisson distribution, using a practical example: calculating the probability of a distributed file system crash. After that I’ll describe the special cases of the Gamma distribution, stating the problems in which they occur.

Gamma and Poisson distributions: same process, different questions

Suppose you have an array of inexpensive unreliable hard drives like WD Caviar Green, with a probability of failure of each individual hard drive on any given day of $10^{-4}$.

You are planning to build a distributed file system with 1000 disks like these, and you need to decide whether to use a RAID5 or RAID6 array for it. RAID5 tolerates a failure of 1 disk (so if 2 disks crash on the same day, your cluster is dead), while RAID6 tolerates a simultaneous failure of 2 disks (so if 3 disks crash on the same day, your cluster is dead).

Poisson distribution

Let us calculate the probability of a failure of $k=3$ drives in one day, given $N=1000$ disks total. This probability is approximated by the Poisson distribution, which we will derive from the binomial distribution:

$$p(k=3 \mid N=1000) = C_{N}^{k} \cdot p^k \cdot (1-p)^{N-k}, \quad \text{where } p = 10^{-4}$$

First, let’s substitute $p = \frac{\lambda}{N}$, where $\lambda = N \cdot p$ is the parameter of the Poisson distribution. So, we get:

$$p(k=3 \mid N=1000) = \frac{N!}{k! \cdot (N-k)!} \cdot \frac{\lambda^k}{N^k} \cdot \left(1-\frac{\lambda}{N}\right)^{N-k}$$

We can apply some approximations now, since $k \ll N$ and $p$ is small.

Now, let’s recall that $(1 - \frac{\lambda}{N})^N \approx e^{-\lambda}$ for large $N$, and neglect $(1 - \frac{\lambda}{N})^{-k}$ as it is close to 1. So we get:

$$p(k=3 \mid N=1000) = \frac{N!}{k! \cdot (N-k)!} \cdot \frac{\lambda^k}{N^k} \cdot e^{-\lambda}$$

Next, let’s also neglect the difference between $N \cdot (N-1) \cdot \ldots \cdot (N-k+1)$ and $N^k$, so that $\frac{N!}{(N-k)!} \approx N^k$ cancels with the $N^k$ in the denominator, and we get the final form of the Poisson distribution:

$$p(k=3 \mid N=1000) = \frac {\lambda^k \cdot e^{-\lambda}}{k!}$$

Hence, our RAID6 will go south due to a simultaneous failure of 3 hard drives in one day with probability $\frac{0.1^3 \cdot e^{-0.1}}{3!} = \frac{0.001 \cdot 0.9048}{6} \approx 0.00015$.

Seems small, but the probability of this happening within 5 years is around 25%: $1 - (1 - 0.00015)^{365 \cdot 5} \approx 0.2406$. So the described cluster isn’t really viable.
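To double-check these numbers, here is a minimal sketch (assuming only the Python standard library) that compares the exact binomial probability with the Poisson approximation and reproduces the 5-year estimate:

```python
from math import comb, exp, factorial

N, p, k = 1000, 1e-4, 3
lam = N * p  # lambda = 0.1 expected failures per day

# Exact binomial probability of exactly k failures in one day
binom_pmf = comb(N, k) * p**k * (1 - p)**(N - k)

# Poisson approximation derived above
poisson_pmf = lam**k * exp(-lam) / factorial(k)

# Probability of at least one such day over 5 years
p_5_years = 1 - (1 - poisson_pmf)**(365 * 5)

print(binom_pmf)    # ~1.50e-04
print(poisson_pmf)  # ~1.51e-04
print(p_5_years)    # ~0.24
```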

Gamma distribution

We’ve already found out that 1000 Caviar Green HDDs is too much for RAID6. Now, we might want to ask a different (somewhat contrived in this case) question.

Suppose that we know that a RAID6 cluster went down due to a failure of 3 drives. What was the probability that there were 1000 (or 100 or 10000) hard drives in that cluster?

Note that in the previous case the number of disks $N$ was a fixed parameter and the number of crashes $k$ was a variable, and this time it’s the other way around: the number of crashes $k$ is a fixed parameter, while the total number of disks $N$ is a variable. We’re solving a very similar problem, which Bayesians would call conjugate.

$N$ can be interpreted as the stopping time of a process until $k$ failures happen. For instance, if you were dropping paratroopers one by one from a plane and each paratrooper had a probability $p$ that his parachute doesn’t open, you might ask yourself what is the probability that you’d manage to drop 1000 troopers before the parachutes of 3 of them malfunctioned.

Let’s keep the notation and show that the probability in question is described by the Gamma distribution.

$$p(N=1000 \mid k=3) = \frac{C_N^k \cdot p^k \cdot (1-p)^{N-k}}{\sum\limits_{n=k}^{\infty}C_n^k \cdot p^k \cdot (1-p)^{n-k}} = \frac{\frac{N!}{(N-k)!} \cdot (1-p)^N }{ \sum \limits_{n=k}^{\infty} \frac{n!}{(n-k)!} \cdot (1-p)^n } \approx \frac{N^k \cdot (1-p)^N}{ \sum \limits_{n=k}^{\infty} n^k \cdot (1-p)^n }$$

Now, recall that the gamma function is defined as $\Gamma(k+1) = \int \limits_{0}^{\infty} n^{k} e^{-n} dn$, which looks suspiciously similar to the denominator of the probability above.

Let’s change the notation $\ln(1-p) = -\frac{1}{\theta}$, so that $(1 - p)^n = e^{-\frac{n}{\theta}}$, and we almost get $p(N=1000 \mid k=3) \approx \frac{N^k \cdot e^{-\frac{N}{\theta}}}{\Gamma(k+1)}$, which is very close to the definition of the Gamma distribution.

I wasn’t rigorous here with the transition from a sum to an integral, with the sum/integral limits and with the $\theta^{-k}$ multiplier; however, it gives you an idea of what’s going on.

So, to make this argument more rigorous, instead of discrete values of $N$, corresponding to the total number of hard drives or paratroopers, I’ll transition to a continuous real-valued $N$ (you may interpret it as continuous time).

As we know the cumulative distribution function of our Gamma process from its Poisson counterpart, $F_\xi(N) = p(\xi \leq N) = 1 - \sum \limits_{x=0}^{k-1} \frac{(\lambda N)^x e^{-\lambda N}}{x!}$, we can infer the probability density function from it. I’ll be following the general logic of this post by Aerin Kim.

$$f_\xi(N) = \frac{d F_\xi(N)}{dN} = \frac{d\left(1 - \sum \limits_{x=0}^{k-1} \frac{(\lambda N)^x e^{-\lambda N}}{x!}\right)}{dN} = \sum \limits_{x=0}^{k-1} \frac{\lambda \cdot (\lambda N)^x \cdot e^{-\lambda N} - x \cdot \lambda \cdot (\lambda N)^{x-1} \cdot e^{-\lambda N}}{x!} = \lambda \cdot e^{-\lambda N} \cdot \left(1 + \sum \limits_{x=1}^{k-1}\left(\frac{(\lambda N)^x}{x!} - \frac{(\lambda N)^{x-1}}{(x-1)!}\right)\right) =$$

$$= \lambda \cdot e^{-\lambda N} \cdot \left(1 + \frac{(\lambda N)^{k-1}}{(k-1)!} - 1\right) = \frac{\lambda \cdot e^{-\lambda N} \cdot (\lambda N)^{k-1}}{(k-1)!} = \frac{\lambda^k \cdot N^{k-1} \cdot e^{-\lambda N}}{(k-1)!}$$

From a Bayesian standpoint, what we did here in order to calculate the Gamma distribution was $p(N|k) = \frac{p(N \cap k)}{p(k)} = \frac{p(k|N)\cancel{p(N)}}{\sum \limits_{n=k}^{\infty} p(k|n)\cancel{p(n)}} = \frac{p(k|N)}{\sum \limits_{n=k}^{\infty} p(k|n)}$, where $p(k|n)$ is Poisson-distributed in case $n \cdot p$ is a small constant. The Gamma distribution is called the conjugate prior for the Poisson distribution. The Bayesian approach might be harmful for your sanity, however, as there is no way to rationalize the priors $p(N)$ and $p(n)$ (assumed equal here).
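As a sanity check of the density we’ve just derived, a minimal sketch (assuming numpy and scipy are available): with shape $k$ and scale $1/\lambda$ it should coincide with scipy’s Gamma distribution, where $\lambda$ plays the role of the per-disk daily failure probability.

```python
import numpy as np
from math import factorial
from scipy.stats import gamma

k = 3        # number of failures
lam = 1e-4   # per-disk daily failure probability, i.e. the rate per unit of N

N = np.linspace(1.0, 60000.0, 200)

# Density derived above: lambda^k * N^(k-1) * exp(-lambda*N) / (k-1)!
manual = lam**k * N**(k - 1) * np.exp(-lam * N) / factorial(k - 1)

# scipy's Gamma distribution with shape k and scale 1/lambda
scipy_pdf = gamma.pdf(N, a=k, scale=1/lam)

print(np.allclose(manual, scipy_pdf))  # True
```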

Special cases of Gamma distribution

Erlang distribution

The Erlang distribution is a special case of the same Gamma distribution that arises in queueing theory, with the special condition that $k$ is an integer (as in the case we considered above).

Agner Krarup Erlang was a Danish mathematician and engineer, who worked on mathematical models of reliability of service in the early days of telephony in the 1900-1910s. He was studying the probability of denial of service or poor quality of service in a telephone network of limited bandwidth. He actually came up with multiple models, which I won’t consider in full here, but they are nicely described in this paper by Hideaki Takagi.

Suppose that you join a queue with 1 cashier, where $k$ people are ahead of you. The time that it takes the cashier to serve a single person is described by an exponential distribution, so that the probability density function of the service time $t$ for a single person is $f(t) = \alpha \cdot e^{-\alpha t}$, and the probability that the service takes longer than $t$ equals $p(T \geq t) = e^{-\alpha t}$. What is the probability that you’ll be served in $t$ minutes? This example is stolen from this post.

So, essentially we are dealing with a sum of independent identically distributed exponential variables, and like any sum of i.i.d. variables it will eventually converge to the normal distribution. But for any finite number of summands it is exactly described by an Erlang/Gamma distribution.

To show that, we’ll have to make use of convolutions. I prefer thinking of them in a discrete matrix form, but the same logic applies in the continuous case, with the sum replaced by an integral.

What is the probability that serving 2 persons will take the cashier 5 minutes? It can be calculated as a sum of probabilities:

p(servicing 2 persons took 5 minutes) = p(person 1 took 1 minute) * p(person 2 took 4 minutes) + p(person 1 took 2 minutes) * p(person 2 took 3 minutes) + p(person 1 took 3 minutes) * p(person 2 took 2 minutes) + p(person 1 took 4 minutes) * p(person 2 took 1 minute).

Or in a matrix notation:

$$\begin{bmatrix} p(\xi_1 + \xi_2=1) \\ p(\xi_1 + \xi_2=2) \\ p(\xi_1 + \xi_2=3) \\ p(\xi_1 + \xi_2=4) \\ \end{bmatrix} = \begin{pmatrix} p(\xi_1 = 0) & p(\xi_1 = -1) & p(\xi_1 = -2) & p(\xi_1 = -3) \\ p(\xi_1 = 1) & p(\xi_1 = 0) & p(\xi_1 = -1) & p(\xi_1 = -2) \\ p(\xi_1 = 2) & p(\xi_1 = 1) & p(\xi_1 = 0) & p(\xi_1 = -1) \\ p(\xi_1 = 3) & p(\xi_1 = 2) & p(\xi_1 = 1) & p(\xi_1 = 0) \\ \end{pmatrix} \cdot \begin{bmatrix} p(\xi_2=1) \\ p(\xi_2=2) \\ p(\xi_2=3) \\ p(\xi_2=4) \end{bmatrix} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ p(\xi_1 = 1) & 0 & 0 & 0 \\ p(\xi_1 = 2) & p(\xi_1 = 1) & 0 & 0 \\ p(\xi_1 = 3) & p(\xi_1 = 2) & p(\xi_1 = 1) & 0 \\ \end{pmatrix} \cdot \begin{bmatrix} p(\xi_2=1) \\ p(\xi_2=2) \\ p(\xi_2=3) \\ p(\xi_2=4) \end{bmatrix}$$

Or to write it in a short formula:

$$p(\xi_1 + \xi_2 = t) = (p_\xi * p_\xi)(t) = \sum \limits_{s=0}^{t} p_\xi(t-s) \cdot p_\xi(s)$$
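The matrix above is just a discrete convolution. A small sketch (assuming numpy and a hypothetical discrete service-time pmf) shows that `np.convolve` reproduces exactly this sum:

```python
import numpy as np

# Hypothetical pmf of one service time: index i means (i + 1) minutes
p_xi = np.array([0.4, 0.3, 0.2, 0.1])

# Distribution of the sum of two i.i.d. service times
p_sum = np.convolve(p_xi, p_xi)  # index j means (j + 2) minutes

# p(xi1 + xi2 = 5 minutes): person 1 takes i minutes, person 2 takes 5 - i
manual = sum(p_xi[i - 1] * p_xi[(5 - i) - 1] for i in range(1, 5))

print(p_sum[3], manual)  # both 0.2
```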

Now, in the continuous case, instead of summing over the probabilities, we will be integrating over probability density functions (think of it as if your probability vectors and convolution matrices have become infinite-dimensional):

$$(f * f)(t) = \int \limits_{0}^{t} f(t-s) f(s) ds$$

In the case of $f(t) = \alpha \cdot e^{-\alpha t}$, we get:

$$(f * f)(t) = \alpha^2 t \, e^{-\alpha t}$$

$$(f * f * f)(t) = \frac{\alpha^3 t^2}{2} e^{-\alpha t}$$

$$(f * f * f * f)(t) = \frac{\alpha^4 t^3}{3!} e^{-\alpha t}$$

$$\underbrace{(f * \ldots * f)(t)}_\text{k times} = \frac{\alpha^k t^{k-1}}{(k-1)!} e^{-\alpha t}$$

Now, if we recall that $\Gamma(k) = (k-1)!$ for integer $k$, and set $N = t$ and $\alpha = \frac{1}{\theta}$, we’ve recovered the Gamma distribution from the previous section.

Let us show that these formulae are correct by induction. First, as the base case, let’s derive the Erlang distribution for the sum of two exponential variables:

$$(f * f)(t) = \int \limits_{0}^{t} f(t-s) f(s) ds = \int \limits_{0}^{t} \alpha e^{-\alpha (t-s)} \alpha e^{-\alpha s} ds = \int \limits_{0}^{t} \alpha^2 e^{-\alpha t} ds = \alpha^2 t e^{-\alpha t}$$

Now let’s derive the Erlang distribution with $k+1$ degrees of freedom from the Erlang distribution with $k$ degrees:

$$\underbrace{(f * \ldots * f)(t)}_\text{k+1 times} = \int \limits_{0}^{t} \frac{\alpha^k s^{k-1}}{(k-1)!} e^{-\alpha s} \cdot \alpha e^{-\alpha (t-s)} ds = \frac{\alpha^{k+1} e^{-\alpha t}}{(k-1)!} \int \limits_{0}^{t} s^{k-1} ds = \frac{ \alpha^{k+1} t^k e^{-\alpha t} }{k!}$$
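To double-check the closed form, here is a small simulation sketch (assuming numpy; the rate $\alpha$ and queue length $k$ are arbitrary illustrative values): the histogram of sums of $k$ i.i.d. exponential waiting times should follow the Erlang density derived above.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
alpha, k = 0.5, 4            # hypothetical service rate and number of people ahead
n_samples = 200_000

# Total waiting time = sum of k i.i.d. exponential service times
waits = rng.exponential(scale=1/alpha, size=(n_samples, k)).sum(axis=1)

# Erlang density derived above: alpha^k * t^(k-1) * e^(-alpha*t) / (k-1)!
t = np.linspace(0.5, 30, 60)
erlang_pdf = alpha**k * t**(k - 1) * np.exp(-alpha * t) / factorial(k - 1)

# Empirical density from the simulation
hist, edges = np.histogram(waits, bins=100, range=(0, 40), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# The two should agree up to Monte Carlo and binning noise
print(np.max(np.abs(np.interp(t, centers, hist) - erlang_pdf)))  # small, ~1e-3
```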

Chi-square distribution

The Chi-square distribution is ubiquitous in mathematical statistics, especially in its applications to medicine and biology.

The most classical situation where it arises is when we are looking at the distribution of a sum of squares of Gaussian random variables:

$$\xi_1^2 + \xi_2^2 + \ldots + \xi_k^2 \sim \chi^2, \quad \text{where } \xi_i \sim \mathcal{N}(0, 1)$$

Probability density function of Chi-square with 1 degree of freedom

The easiest way to derive the distribution of chi-square is to take a Gaussian-distributed random variable $\xi_i$ and to show that $\xi_i^2$ is actually a Gamma-distributed random variable, very similar to the Erlang distribution:

$$F_\xi(\sqrt{y}) = p(\xi \leq \sqrt{y}) = \frac{1}{\sqrt{2\pi}} \int \limits_{-\infty}^{\sqrt{y}} e^{\frac{-t^2}{2}} dt$$

Since $\xi^2 \leq y$ is equivalent to $-\sqrt{y} \leq \xi \leq \sqrt{y}$, and the Gaussian density is symmetric:

$$F_{\xi^2}(y) = p(\xi^2 \leq y) = p(-\sqrt{y} \leq \xi \leq \sqrt{y}) = \frac{2}{\sqrt{2\pi}} \int \limits_{0}^{\sqrt{y}} e^{\frac{-x^2}{2}} dx = \frac{1}{\sqrt{2\pi}} \int \limits_{0}^{y} e^{-t/2} \frac{dt}{\sqrt{t}}$$

where in the last step we substituted $x = \sqrt{t}$, $dx = \frac{dt}{2\sqrt{t}}$. Hence the probability density function is $f_{\xi^2}(t) = \frac{t^{-1/2} e^{-t/2}}{\sqrt{2\pi}}$.

So if we recall the pdf of the Gamma distribution, $f(x; \alpha, \beta) = \frac{\beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)}$, the humble $\chi^2$ with 1 degree of freedom is nothing more than a Gamma distribution with $\alpha = 1/2$ and $\beta = 1/2$ (note that $\Gamma(1/2) = \sqrt{\pi}$, so $\frac{(1/2)^{1/2} x^{-1/2} e^{-x/2}}{\Gamma(1/2)} = \frac{x^{-1/2} e^{-x/2}}{\sqrt{2\pi}}$).

It is also very close to the Erlang distribution $f(x; k, \alpha) = \frac{\alpha^{k} x^{k-1} e^{-\alpha x}}{(k-1)!}$, but for Erlang the shape $k$ can only be an integer, while for Chi-square it takes values $\frac{i}{2}$, where $i$ is an integer.

For reference, see this and this.
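A quick numerical confirmation of this identification, assuming scipy is available (scipy parameterizes the Gamma distribution by shape $a$ and scale $= 1/\beta$):

```python
import numpy as np
from scipy.stats import chi2, gamma

x = np.linspace(0.1, 10, 100)

# Chi-square with 1 degree of freedom ...
chi2_pdf = chi2.pdf(x, df=1)

# ... is the Gamma distribution with alpha = 1/2 and beta = 1/2,
# i.e. shape a = 0.5 and scale = 1/beta = 2 in scipy's parameterization
gamma_pdf = gamma.pdf(x, a=0.5, scale=2.0)

print(np.allclose(chi2_pdf, gamma_pdf))  # True
```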

Probability density function for Chi-square with k degrees of freedom

Again, let us use the convolution formula:

$$(f * f)(t) = \int \limits_{0}^{t} f(t-s) f(s) ds = \int \limits_{0}^{t} \frac{e^{-(t-s)/2}}{\sqrt{2 \pi (t-s)}} \cdot \frac{e^{-s/2}}{\sqrt{2 \pi s}} ds = \frac{e^{-t/2}}{2\pi} \int \limits_{0}^{t} \frac{ds}{\sqrt{s (t-s)}}$$

Now we do a variable substitution $s = tu$, $ds = t \, du$. Also note the corresponding change of the integration upper limit to 1:

$$(f * f)(t) = \frac{e^{-t/2}}{2\pi} \int \limits_{0}^{1} \frac{t \, du}{\sqrt{tu} \cdot \sqrt{t(1-u)}} = \frac{e^{-t/2}}{2\pi} \int \limits_{0}^{1} \frac{du}{\sqrt{u(1-u)}}$$

The integral we got is the well-known Beta function $B(\frac{1}{2}, \frac{1}{2})$, another fun function, closely related to the Gamma function. The Beta function is a constant that can be expressed through Gamma functions as $B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$; here $B(\frac{1}{2}, \frac{1}{2}) = \frac{\Gamma(\frac{1}{2})^2}{\Gamma(1)} = \pi$, so $(f * f)(t) = \frac{e^{-t/2}}{2}$, which is exactly the $\chi^2$ density with 2 degrees of freedom.
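A quick check of both of these claims, assuming scipy is available:

```python
import numpy as np
from scipy.special import beta
from scipy.stats import chi2

# The Beta function B(1/2, 1/2) equals pi
print(beta(0.5, 0.5), np.pi)  # both ~3.14159

t = np.linspace(0.1, 10, 50)
# (f * f)(t) = e^{-t/2} / (2*pi) * B(1/2, 1/2) = e^{-t/2} / 2
conv = np.exp(-t / 2) / (2 * np.pi) * beta(0.5, 0.5)
print(np.allclose(conv, chi2.pdf(t, df=2)))  # True
```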

Now, in the general case, the $\chi^2$ distribution with $k+1$ degrees of freedom is:

$$\underbrace{(f * \ldots * f)}_\text{k+1 times}(t) = \int \limits_{0}^{t} f_{k}(s) \cdot \frac{e^{-(t-s)/2} (t-s)^{-\frac{1}{2}}}{\sqrt{2} \cdot \Gamma(\frac{1}{2})} ds = \int \limits_{0}^{t} \frac{s^{k/2-1} e^{-s/2}}{2^{\frac{k}{2}} \Gamma(\frac{k}{2})} \cdot \frac{e^{-(t-s)/2} (t-s)^{-\frac{1}{2}}}{\sqrt{2} \cdot \Gamma(\frac{1}{2})} ds = \frac{e^{-t/2}}{2^{\frac{1}{2}} \Gamma(\frac{1}{2})} \int \limits_{0}^{t} \frac{s^{k/2-1} (t-s)^{-\frac{1}{2}}}{2^{\frac{k}{2}} \Gamma(\frac{k}{2})} ds$$

Let’s do the substitution again: $s = tu$, $ds = t \, du$:

$$\frac{e^{-t/2}}{ 2^{\frac{1}{2}} \Gamma(\frac{1}{2}) } \int \limits_{0}^{t} \frac{s^{k/2-1} (t-s)^{-\frac{1}{2}} }{2^{\frac{k}{2}} \Gamma(\frac{k}{2})} ds = \frac{e^{-t/2}}{ 2^{\frac{1}{2}} \Gamma(\frac{1}{2}) } \int \limits_{0}^{1} \frac{(tu)^{k/2-1} (t-tu)^{-\frac{1}{2}} }{2^{\frac{k}{2}} \Gamma(\frac{k}{2})} t \, du = \frac{e^{-t/2} t^{\frac{k+1}{2} - 1} }{ 2^{\frac{k+1}{2}} } \int \limits_{0}^{1} \frac{u^{k/2-1} (1-u)^{-\frac{1}{2}}}{ \Gamma(\frac{1}{2}) \Gamma(\frac{k}{2})} du = \frac{e^{-t/2} t^{\frac{k+1}{2} - 1} }{ 2^{\frac{k+1}{2}} \Gamma(\frac{k+1}{2}) } \int \limits_{0}^{1} \frac{\Gamma(\frac{k+1}{2}) }{ \Gamma(\frac{1}{2}) \Gamma(\frac{k}{2})} u^{\frac{k}{2}-1} (1-u)^{-\frac{1}{2}} du$$

Notice that the expression under the integral is the probability density function of the Beta distribution $B(\frac{k}{2}, \frac{1}{2})$, so the integral over $[0, 1]$ equals 1.

Thus, $f_{\chi_{k+1}^2}(t) = \frac{e^{-t/2} \cdot t^{\frac{k+1}{2} - 1} }{ 2^{\frac{k+1}{2}} \Gamma(\frac{k+1}{2}) }$.

A helpful link with this proof is here.
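As a sanity check of the induction step (assuming numpy and scipy are available), one can convolve the $\chi^2_k$ and $\chi^2_1$ densities numerically on a grid and compare the result with the closed form for $k+1$ degrees of freedom:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import chi2

k = 5
dt = 0.001
t = np.arange(dt, 60, dt)

# Numerically convolve the chi^2_k and chi^2_1 densities on a grid
conv = fftconvolve(chi2.pdf(t, df=k), chi2.pdf(t, df=1)) * dt
conv = conv[:len(t)]

# Compare with the closed-form density for k+1 degrees of freedom
err = np.max(np.abs(conv - chi2.pdf(t, df=k + 1)))
print(err)  # small (a few 1e-3, limited by the grid near the 1/sqrt(t) singularity)
```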

Weibull distribution

If you take a look at the Weibull distribution PDF/CDF, you’ll figure out that it is NOT a special case of the Gamma distribution, because $x$ in $e^{-x^m}$ is taken to the power $m$. However, it is a special case of the Generalized Gamma distribution family, so I decided to write about it here too.

The Weibull distribution arises in Extreme Value Theory and is known as the Type III Extreme Value Distribution. There’s a theorem (Fisher-Tippett-Gnedenko) stating that the maximum of a set of i.i.d. normalized random variables can only converge to one of three types of distributions, and Weibull is one of them (the other two being Gumbel and Fréchet). Here is a nice introduction.

Waloddi Weibull was a Swedish engineer and mathematician who started working in the field of strength of materials, particle size upon grinding, etc. in the 1930s and thoroughly described the Weibull distribution in 1951, although its properties had been studied much earlier by Fréchet, Rosin & Rammler, von Mises, Fisher, Tippett, Gnedenko and others.

Unfortunately, there is no solid theoretical motivation for the Weibull distribution, and the physical interpretation of its shape parameter $m$ is unclear. The only rationale that Weibull himself provides for the shape of his distribution is that $\frac{(x-x_0)^m}{x_0}$ is the simplest function satisfying the condition of being positive, non-decreasing and vanishing at $x_0$. I quote Weibull’s 1951 paper:

The objection has been stated that this distribution function has no theoretical basis. But in so far as the author understands, there are - with very few exceptions - the same objections against all other df, applied to real populations from natural biological fields, at least in so far as the theoretical has anything to do with the population in question. Furthermore, it is utterly hopeless to expect a theoretical basis for distribution functions of random variables such as strength of materials or of machine parts or particle sizes, the “particles” being fly ash, Cyrtoideae, or even adult males, born in the British Isles.

The Weibull distribution is often applied to the problem of the strength of a chain: a chain breaks whenever any one of its links breaks. If every link’s strength is Weibull-distributed, the strength of the chain as a whole is Weibull-distributed, too. Let’s look at the cumulative distribution function.

Basically, it says that p(a chain link survives application of a force $\leq x$) $= e^{-x^m}$. So, the whole chain of $k$ links survives with probability $(e^{-x^m})^k = e^{-k x^m}$, which is still Weibull-distributed (the scale changes, but the shape $m$ stays the same).
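A small simulation sketch of this weakest-link argument, assuming numpy and scipy are available (the shape $m$ and the number of links $k$ are arbitrary illustrative values):

```python
import numpy as np
from scipy.stats import weibull_min, kstest

rng = np.random.default_rng(0)
m, k = 2.0, 50               # hypothetical shape parameter and number of links
n_chains = 100_000

# Strength of each link: survival function e^{-x^m}, i.e. Weibull with scale 1
links = weibull_min.rvs(c=m, scale=1.0, size=(n_chains, k), random_state=rng)

# A chain is as strong as its weakest link
chains = links.min(axis=1)

# The minimum should again be Weibull with survival e^{-k x^m},
# i.e. the same shape m and scale k^(-1/m)
print(kstest(chains, weibull_min(c=m, scale=k**(-1/m)).cdf).pvalue)
# typically not small: the fit is exact up to sampling noise
```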


Boris Burkov

Written by Boris Burkov, who lives in Moscow, Russia, loves to take part in the development of cutting-edge technologies, reflects on how the world works and admires the giants of the past. You can follow me on Telegram.