In maximum likelihood estimation, you are trying to maximize $\binom{n}{x}p^x(1-p)^{n-x}$; however, maximizing this is equivalent to maximizing $p^x(1-p)^{n-x}$ for fixed $x$.
Actually, the likelihoods for the Gaussian and Poisson also do not involve their leading constants, so this case is just like those as well.
Addressing OP's Comment
Here is a bit more detail:
First, $x$ is the total number of successes, whereas $x_i$ is a single trial (0 or 1). Therefore:

$$\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^n x_i}(1-p)^{\sum_{i=1}^n (1-x_i)} = p^x(1-p)^{n-x}$$
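As a quick numeric sanity check of that identity (a sketch using NumPy with made-up data; the trial outcomes `xi` and the value of `p` are arbitrary), the product of per-trial Bernoulli terms matches the collapsed form:

```python
import numpy as np

# Hypothetical data: ten Bernoulli trials, x = number of successes
xi = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
n, x = len(xi), int(xi.sum())
p = 0.3  # any fixed value of p

# Left-hand side: product of the per-trial terms p^xi (1-p)^(1-xi)
per_trial = np.prod(p**xi * (1 - p)**(1 - xi))

# Right-hand side: collapsed form p^x (1-p)^(n-x)
collapsed = p**x * (1 - p)**(n - x)

print(np.isclose(per_trial, collapsed))  # True
```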
That shows how you get the factors in the likelihood (by running the above steps backwards).
Why does the constant go away? Informally, what most people do (including me) is simply notice that the leading constant does not affect the value of $p$ that maximizes the likelihood, so we just ignore it (effectively set it to 1).
We can derive this by taking the log of the likelihood function and finding where its derivative is zero:
$$\ln\left(\binom{n}{x}\, p^x(1-p)^{n-x}\right) = \ln\binom{n}{x} + x\ln(p) + (n-x)\ln(1-p)$$

Take the derivative with respect to $p$ and set it to 0:

$$\frac{d}{dp}\left[\ln\binom{n}{x} + x\ln(p) + (n-x)\ln(1-p)\right] = \frac{x}{p} - \frac{n-x}{1-p} = 0$$

$$\implies \frac{n}{x} = \frac{1}{p} \implies \hat{p} = \frac{x}{n}$$
Notice that the leading constant dropped out of the calculation of the MLE.
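You can also see this numerically (a sketch with hypothetical counts `n = 10`, `x = 6`): a grid search over $p$ maximizes both the full likelihood and the constant-free kernel at the same place, namely $x/n$:

```python
import numpy as np
from math import comb

# Hypothetical counts: n trials, x successes
n, x = 10, 6
grid = np.linspace(0.001, 0.999, 9999)  # candidate values of p

kernel = grid**x * (1 - grid)**(n - x)  # likelihood without nCx
full = comb(n, x) * kernel              # likelihood with nCx

p_hat_kernel = grid[np.argmax(kernel)]
p_hat_full = grid[np.argmax(full)]

print(p_hat_kernel, p_hat_full, x / n)  # both maximizers sit at ~0.6 = x/n
```

Since `full` is just a positive constant times `kernel`, the argmax is at the same grid point in both cases.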
More philosophically, a likelihood is only meaningful for inference up to a multiplicative constant: if we have two likelihood functions $L_1, L_2$ with $L_1 = kL_2$ for some constant $k > 0$, then they are inferentially equivalent. This is called the Law of Likelihood. Therefore, if we are comparing different values of $p$ using the same likelihood function, the leading term becomes irrelevant.
At a practical level, inference using the likelihood function is actually based on the likelihood ratio, not the absolute value of the likelihood. This is due to the asymptotic theory of likelihood ratios (minus twice the log of the ratio is asymptotically chi-square, subject to certain regularity conditions that are often appropriate). Likelihood ratio tests are favored due to the Neyman-Pearson Lemma. Therefore, when we test two simple hypotheses, we take the ratio, and the common leading factor cancels.
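Concretely (a sketch with hypothetical data and two arbitrary simple hypotheses $p_1, p_2$), the leading binomial coefficient cancels in the likelihood ratio:

```python
from math import comb

n, x = 10, 6       # hypothetical data
p1, p2 = 0.5, 0.7  # two simple hypotheses about p

def likelihood(p, with_const=True):
    c = comb(n, x) if with_const else 1
    return c * p**x * (1 - p)**(n - x)

ratio_with = likelihood(p1) / likelihood(p2)
ratio_without = likelihood(p1, False) / likelihood(p2, False)

# The common leading factor nCx cancels in the ratio
print(abs(ratio_with - ratio_without) < 1e-12)  # True
```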
NOTE: This will not happen if you are comparing two different models, say a binomial and a Poisson. In that case, the constants do matter.
Of the above reasons, the first (irrelevance to finding the maximizer of L) most directly answers your question.