If I fix the values of the observed nodes of an MRF, does it become a CRF?
Answer:
OK, I found the answer myself:
Conditional Random Fields (CRFs) are a special case of Markov Random Fields (MRFs).
1.5.4 Conditional Random Fields
A Conditional Random Field (CRF) is a form of MRF that defines a posterior for variables x given data z, as with the hidden MRF above. Unlike the hidden MRF, however, the factorization into the data distribution P(z | x) and the prior P(x) is not made explicit [288]. This allows complex dependencies of x on z to be written directly in the posterior distribution, without the factorization being made explicit. (Given P(x | z), such factorizations always exist, however, many of them in fact, so there is no suggestion that the CRF is more general than the hidden MRF, only that it may be more convenient to deal with.)
Source: Blake, Kohli and Rother: Markov Random Fields for Vision and Image Processing. 2011
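In symbols (my notation, matching the quote's x and z): the hidden MRF reaches the posterior through an explicit generative factorization, while the CRF parameterizes the posterior directly with clique potentials that take the data z as an argument:

$$P_{\text{hidden MRF}}(x \mid z) \propto P(z \mid x)\, P(x), \qquad P_{\text{CRF}}(x \mid z) = \frac{1}{Z(z)} \prod_c \psi_c(x_c, z).$$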
A conditional random field or CRF (Lafferty et al. 2001), sometimes called a discriminative random field (Kumar and Hebert 2003), is just a version of an MRF where all the clique potentials are conditioned on input features: [...]
The advantage of a CRF over an MRF is analogous to the advantage of a discriminative classifier over a generative classifier (see Section 8.6), namely, we don't need to "waste resources" modeling things that we always observe. [...]
The disadvantage of CRFs over MRFs is that they require labeled training data, and they are slower to train. [...]
Source: Kevin P. Murphy: Machine Learning: A Probabilistic Perspective
Answering my question:
If I fix the values of the observed nodes of an MRF, does it become a CRF?
Yes. Fixing the values is the same as conditioning on them. However, you should note that there are also differences in how the two are trained.
Watching many of the lectures on PGMs (probabilistic graphical models) on Coursera helped me a lot.
MRFs vs. Bayes nets: Imprecisely (but commonly) speaking, there are two kinds of graphical models: undirected graphical models and directed graphical models (there are further kinds too, e.g., Tanner graphs). The former are also known as Markov Random Fields / Markov networks, and the latter as Bayes / Bayesian networks. (Sometimes the independence assumptions of both can be represented by chordal graphs.)
Markov implies the way the distribution factorizes, and random field means a particular distribution among those defined by the undirected model.
CRF vs. MRF: When some variables are observed, we can use the same undirected graph representation and the same parameterization to encode a conditional distribution P(Y | X), where Y is a set of target variables and X is a (disjoint) set of observed variables.
The only difference lies in the normalization: for a standard Markov network the normalization term sums over both X and Y, but for a CRF it sums over Y only.
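Written out explicitly (standard notation, with $\phi_c$ denoting the clique potentials over the cliques $c$ of the graph):

$$P_{\text{MRF}}(Y, X) = \frac{1}{Z} \prod_c \phi_c(Y_c, X_c), \qquad Z = \sum_{Y,\, X} \prod_c \phi_c(Y_c, X_c),$$

$$P_{\text{CRF}}(Y \mid X) = \frac{1}{Z(X)} \prod_c \phi_c(Y_c, X_c), \qquad Z(X) = \sum_{Y} \prod_c \phi_c(Y_c, X_c).$$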
Let's contrast conditional inference under MRFs with modeling using a CRF, settling on definitions along the way, and then address the original question.
A Markov Random Field (MRF) with respect to a graph G is a joint distribution over the variables associated with the nodes of G that obeys the Markov constraints encoded by G (each variable is conditionally independent of all non-neighbors given its neighbors in G).
Since an MRF represents a joint distribution over many variables that obeys Markov constraints, we can compute conditional probability distributions given observed values of some of the variables.
For example, if I have a joint distribution over four random variables: IsRaining, SprinklerOn, SidewalkWet, and GrassWet, then on Monday I might want to infer the joint probability distribution over IsRaining and SprinklerOn given that I have observed SidewalkWet=False and GrassWet=True. On Tuesday, I might want to infer the joint probability distribution over IsRaining and SprinklerOn given that I have observed SidewalkWet=True and GrassWet=True.
In other words, we can use the same MRF model to make inferences in these two different situations, but we wouldn't say that we've changed the model. In fact, although we observed SidewalkWet and GrassWet in both cases described here, the MRF itself doesn't have "observed variables" per se---all variables have the same status in the eyes of the MRF, so the MRF also models, e.g., the joint distribution of SidewalkWet and GrassWet.
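To make this concrete, here is a minimal sketch of the example in plain Python (my own toy potentials; the numeric values are made up and only the structure of the computation matters). The same unnormalized joint is reused on both days; "observing" variables just means discarding the assignments that disagree with the evidence and renormalizing.

```python
from itertools import product

# A tiny MRF over four binary variables, represented by its unnormalized joint,
# built from made-up pairwise potentials (illustrative numbers only).
VARS = ["IsRaining", "SprinklerOn", "SidewalkWet", "GrassWet"]

def potential(rain, sprinkler, sidewalk, grass):
    """Product of pairwise clique potentials for one joint assignment."""
    phi = 1.0
    phi *= 3.0 if sidewalk == rain else 1.0       # rain <-> wet sidewalk
    phi *= 3.0 if grass == rain else 1.0          # rain <-> wet grass
    phi *= 3.0 if grass == sprinkler else 1.0     # sprinkler <-> wet grass
    phi *= 0.5 if (rain and sprinkler) else 1.0   # sprinkler rarely on in the rain
    return phi

# Unnormalized joint over all 2^4 assignments (the MRF, up to the constant Z).
joint = {assign: potential(*assign) for assign in product([0, 1], repeat=4)}

def conditional(query_vars, evidence):
    """P(query_vars | evidence): drop assignments inconsistent with the evidence,
    sum out the remaining non-query variables, then renormalize."""
    scores = {}
    for assign, score in joint.items():
        a = dict(zip(VARS, assign))
        if any(a[v] != val for v, val in evidence.items()):
            continue
        key = tuple(a[v] for v in query_vars)
        scores[key] = scores.get(key, 0.0) + score
    z = sum(scores.values())                      # evidence-dependent normalizer
    return {k: v / z for k, v in scores.items()}

# Monday: sidewalk dry, grass wet.  Tuesday: sidewalk wet, grass wet.
# Same model both days; only the evidence passed to inference changes.
print(conditional(["IsRaining", "SprinklerOn"], {"SidewalkWet": 0, "GrassWet": 1}))
print(conditional(["IsRaining", "SprinklerOn"], {"SidewalkWet": 1, "GrassWet": 1}))
```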
In contrast, we can define a Conditional [Markov] Random Field (CRF) with respect to a graph G as a conditional distribution over target variables Y given observed variables X that obeys the Markov constraints of G with respect to the Y variables only.
For both MRFs and CRFs, we typically fit a model that we can then use for conditional inference in diverse settings (as in the rain example above). However, while the MRF has no consistently designated "observed variables" and needs a joint distribution over all variables that adheres to the Markov constraints of G, a CRF:
designates a subset of variables X as "observed"
only defines a conditional distribution on the non-observed variables Y given the observed variables X; it does not model the probability of the observed variables (if distributions are expressed in terms of parameters, this is often seen as a benefit since parameters are not wasted in explaining the probability of things that will always be known)
needs only obey Markov constraints with respect to the unobserved variables (i.e., the distribution over the unobserved variables can depend arbitrarily on the observed variables, while inference is at least as tractable as for the corresponding MRF over the unobserved variables)
Since a CRF does not need to obey Markov constraints on the observed variables X, these are typically not even shown in graphical representations of a CRF (possibly a point of confusion sometimes). Instead, the CRF is defined as an MRF on a graph whose nodes include only the Ys, and whose joint-distribution parameters are functions of the Xs, thus conditionally defining a distribution over the Ys given the Xs.
As a final example, consider a linear-chain MRF in which each observed variable X_i is connected only to its target Y_i and consecutive targets Y_i and Y_{i+1} are connected: that graph indicates that all of the variables Y_2, ..., Y_n are conditionally independent of X_1 given a known value of Y_1.
In contrast, a CRF defined on the same chain of Y variables, with the same designation of the Xs as being always observed, would allow for distributions of the Ys that depend arbitrarily on any of the Xs.
So, although ("yes") the conditional distribution of an MRF on G given designated observed variables can be considered to be a CRF with respect to G (since it defines a conditional distribution that obeys the Markov constraints of G), it is somewhat degenerate and does not achieve the generality of CRFs. Instead, the appropriate recipe would be: given an MRF on G, define an MRF on the non-observed subset of the variables, with the parameters of that MRF expressed as the output of parameterized functions of the observed variables, and train the function parameters to maximize the likelihood of the resulting conditional MRFs on labeled data.
In addition to the potential savings in model parameters, the increased expressiveness of the conditional model, and the retention of inference efficiency, a final important point about the CRF recipe is that, for discrete models (and a large subset of non-discrete models), despite the expressiveness of the CRF family, the negative log-likelihood can be expressed as a convex function of the function parameters, allowing for global optimization with gradient descent.
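To make the recipe concrete, here is a minimal toy sketch (my own construction, not from any of the quoted sources): two binary target variables connected in a chain, with log-linear clique potentials whose parameters are functions of an observed feature vector x, trained by gradient descent on the convex negative log-likelihood. The variable names and numeric data are made up for illustration.

```python
import numpy as np

def score(y, x, w):
    """Unnormalized log-potential of a joint assignment y = (y1, y2) given input x."""
    y1, y2 = y
    w1, w2, w_pair = w[:2], w[2:4], w[4]
    s = y1 * (w1 @ x) + y2 * (w2 @ x)          # unary potentials are functions of x
    s += w_pair * (1.0 if y1 == y2 else 0.0)   # pairwise potential couples y1 and y2
    return s

def neg_log_likelihood(x, y_obs, w):
    """-log P(y_obs | x; w); note the normalizer sums over Y only, never over X."""
    ys = [(a, b) for a in (0, 1) for b in (0, 1)]
    log_z = np.log(sum(np.exp(score(y, x, w)) for y in ys))
    return log_z - score(y_obs, x, w)

def num_grad(f, w, eps=1e-5):
    """Central-difference gradient, to keep the sketch free of autodiff machinery."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

# Two made-up labeled examples (x, y); gradient descent on the convex objective.
data = [(np.array([1.0, -0.5]), (1, 1)),
        (np.array([-1.0, 0.3]), (0, 0))]
w = np.zeros(5)
for _ in range(200):
    objective = lambda w_: sum(neg_log_likelihood(x, y, w_) for x, y in data)
    w -= 0.1 * num_grad(objective, w)

print("learned weights:", w)
x = data[0][0]
logits = np.array([score(y, x, w) for y in [(0, 0), (0, 1), (1, 0), (1, 1)]])
print("P(y | x) for x =", x, ":", np.exp(logits) / np.exp(logits).sum())
```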
See also: the original CRF paper (Lafferty et al. 2001) and this tutorial