Less unsymmetric difference measure for probabilities than KL divergence?














I know about the Kullback–Leibler divergence and that it can be used to measure the difference between two probability distributions.

But it is not symmetric. For example, viewing $P$ from $Q$: if $q$ is ever zero where $p$ is not, the term

$$p(x)\log\left(\frac{p(x)}{q(x)}\right), \quad q(x)=0,\ p(x)\neq 0$$

will of course be infinite. This is reasonable in the sense that if an event is impossible under $q$ but not under $p$, then the discrepancy is in some sense impossible to "repair". But this never happens from the other "view":

$$q(x)\log\left(\frac{q(x)}{p(x)}\right), \quad q(x)=0,\ p(x)\neq 0$$

We can convince ourselves (for instance by investigating the limit $q\log q \to 0$ as $q\to 0^+$) that this term should not be considered infinite.

So how can we build a less unsymmetric distance measure?
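To make the asymmetry concrete, here is a small numeric illustration (a sketch in plain Python; the particular distributions are made up for the example), using the conventions $0\log 0 = 0$ and $p\log(p/0)=\infty$:

```python
import math

def kl(p, q):
    """KL(p||q) for discrete distributions, with the conventions
    0*log(0/q) = 0 and p*log(p/0) = +infinity for p > 0."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue            # 0 * log(0/q) -> 0 in the limit
        if qi == 0:
            return math.inf     # p * log(p/0): impossible to "repair"
        total += pi * math.log(pi / qi)
    return total

p = [0.5, 0.4, 0.1]
q = [0.5, 0.5, 0.0]   # q rules out the third outcome
print(kl(p, q))  # inf
print(kl(q, p))  # finite (about 0.112): the zero-mass term contributes 0
```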



























Tags: calculus, probability, analysis, statistics, information-theory






asked Dec 31 '17 at 17:44 by mathreadler
edited Feb 28 '18 at 11:11






















1 Answer



















          There are a lot of these, you can see some examples here.



The ones that seem to come up most often are:

• Hellinger Distance
• Total Variation Distance
• Wasserstein Distance

though there are plenty of others.
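As one concrete example from the list, the Hellinger distance is symmetric, bounded by 1, and finite even when one distribution assigns zero mass somewhere (a minimal sketch in Python; the distributions are made up for the example):

```python
import math

def hellinger(p, q):
    """Hellinger distance for discrete distributions:
    H(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2."""
    return math.sqrt(
        sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q)) / 2
    )

p = [0.5, 0.4, 0.1]
q = [0.5, 0.5, 0.0]   # zero mass on the third outcome
print(hellinger(p, q) == hellinger(q, p))  # True: symmetric
print(hellinger(p, q))                     # finite despite q's zero
```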



There are many ways one could approach the idea of a "distance on probability distributions". For instance, the Total Variation distance on $\mathcal{P}\times\mathcal{P}$, the product space of probability distributions defined on the same measurable space $(\Omega,\mathcal{B})$, is given by $TV(p_1,p_2) = \sup_{B \in \mathcal{B}}|p_1[B] - p_2[B]|$: the largest gap between the probabilities the two distributions assign to any set in the shared $\sigma$-algebra.



That is a rather abstract notion of distance. However, if $p_1$ and $p_2$ have densities $f_1$ and $f_2$, then
\begin{equation}
TV(p_1,p_2) = \frac{1}{2}\|f_1 - f_2\|_{L_1} = \frac{1}{2}\int_{x\in\mathcal{X}}|f_1(x) - f_2(x)|\,dx,
\end{equation}
which is usually more convenient to work with. (The factor $\frac{1}{2}$ makes this agree with the supremum definition above; the supremum is attained at $B = \{x : f_1(x) > f_2(x)\}$.)
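A quick numeric sanity check of the two characterizations (a sketch in Python; on a 3-point space the supremum over events can be brute-forced over all subsets):

```python
from itertools import chain, combinations

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

# sup over events B: enumerate every subset of the 3-point space
events = chain.from_iterable(combinations(range(3), r) for r in range(4))
tv_sup = max(abs(sum(p[i] for i in B) - sum(q[i] for i in B)) for B in events)

# half the L1 distance between the mass functions
tv_l1 = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

print(tv_sup, tv_l1)  # both 0.2
```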



Similarly, the Wasserstein distance turns out to be useful in other settings as well. You can read more about it at that link; in general, the Wasserstein distance on continuous spaces is abstract and unwieldy, but computing it on discrete spaces reduces to solving a linear (or integer) program. A lot of research (especially in computational imaging) goes into framing these problems as such programs or as other optimization problems.
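A minimal sketch of that discrete linear program, using `scipy.optimize.linprog` (the points and marginals here are made up for illustration; the variables are the entries of the transport plan $x_{ij}$):

```python
import numpy as np
from scipy.optimize import linprog

# Discrete 1-Wasserstein distance as a linear program:
# minimize sum_ij |x_i - y_j| * plan[i, j]
# subject to the plan's marginals being p and q.
points = np.array([0.0, 1.0, 2.0])
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])

n = len(points)
cost = np.abs(points[:, None] - points[None, :]).ravel()  # row-major x[i,j]

A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j x[i,j] = p[i]
    A_eq[n + i, i::n] = 1.0            # sum_i x[i,j] = q[i]
b_eq = np.concatenate([p, q])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)  # 1.0: move half the mass 0 -> 1 and half 1 -> 2
```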



See here, here. It is also used to show consistency of convergence of probability measures in statistical applications, for example here.



A last point: just because some divergences are not symmetric does not mean they cannot be stronger than symmetric distance metrics. For example, if $KL(P_n\|Q)\overset{n\rightarrow\infty}{\longrightarrow}0$, then $P_n\overset{T.V.}{\longrightarrow}Q$; this follows from Pinsker's inequality, $TV(P,Q)\le\sqrt{KL(P\|Q)/2}$.
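That relationship can be checked numerically; the sketch below verifies the Pinsker bound $TV(P,Q)\le\sqrt{KL(P\|Q)/2}$ on random discrete distributions (strictly positive, so KL stays finite):

```python
import math
import random

def kl(p, q):
    """KL(p||q); assumes q > 0 everywhere here."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    """Total variation distance as half the L1 distance."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

random.seed(0)
ok = True
for _ in range(1000):
    a = [random.random() + 0.01 for _ in range(4)]
    b = [random.random() + 0.01 for _ in range(4)]
    p = [x / sum(a) for x in a]
    q = [x / sum(b) for x in b]
    ok = ok and tv(p, q) <= math.sqrt(kl(p, q) / 2) + 1e-12
print(ok)  # True: Pinsker's bound held in every trial
```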



















– mathreadler (Dec 31 '17 at 22:37): Nice one! I know Wasserstein as Earth Mover Distance in information theory, and Total Variation was very popular all over image analysis & computer vision 5-10 years ago. But I'm pretty sure I haven't seen Hellinger yet.











answered Dec 31 '17 at 21:22 by Ryan Warnick
edited Dec 3 '18 at 11:54











