How to minimize sum of matrix-convolutions?


























Given $A$, what should $B$ be so that

$$\lVert I \circledast A - I \circledast B \rVert_2$$

is minimal for any $I$?

  • $I \in \mathbb{R}^{20\times 20}$, $A \in \mathbb{R}^{5\times 5}$, $B \in \mathbb{R}^{3\times 3}$. Note that $B$ is smaller than $A$.

  • $I \circledast K$ is the convolution of $I$ with kernel $K$. The result is zero-padded so that it matches the shape of the input $I$; this means that $(I \circledast K) \in \mathbb{R}^{20\times 20}$.

Does a closed form exist? Do I need to use a Fourier transform? As an extension, how does this work when $A, B$ are not square and can be of arbitrary size (instead of the special case here, where $A$ is larger than $B$)?
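For concreteness, here is a small numerical sketch of the objective (my own illustration, not part of the original question), assuming SciPy's `convolve2d` with `mode="same"` realizes the zero-padded convolution described above:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
I = rng.standard_normal((20, 20))   # image
A = rng.standard_normal((5, 5))     # large kernel
B = rng.standard_normal((3, 3))     # small candidate kernel

def objective(I, A, B):
    """Frobenius norm of I (*) A - I (*) B with zero-padded 'same' convolution."""
    return np.linalg.norm(convolve2d(I, A, mode="same")
                          - convolve2d(I, B, mode="same"))

# zero-padding B to 5x5 around its center leaves the 'same' convolution
# unchanged, so "centering B on A" is a well-defined comparison
assert np.isclose(objective(I, np.pad(B, 1), B), 0.0)
```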






















  • Interesting problem. In other words, you're looking for a smaller filter that works as closely as possible to the bigger one. In that case, maybe $B$ should be padded so that its center matches $A$'s. The convolution operator is linear, i.e. $I\circledast A - I\circledast B = I\circledast (A-B)$. You can apply the Fourier transform and make use of Parseval's theorem, but from there I don't know how to proceed...
    – syockit
    Dec 12 '18 at 15:59










  • Yeah, I think so too! I built a small NN that optimized the objective function, and it always learnt to match the center of $A$, so I guess that must be it. I still can't prove it, though.
    – Frederik Bode
    Dec 13 '18 at 18:15
















optimization fourier-transform convolution fast-fourier-transform






edited Dec 19 '18 at 4:11









Eric Wofsey











asked Dec 9 '18 at 16:54









Frederik Bode













1 Answer













I will start with the one-dimensional case.
Convolution is linear:
$$z=I*B-I*A=I*(B-A)$$
Expanding the definition of convolution,
$$z_n = \sum_{n'}I_{n-n'}(B_{n'}-A_{n'})$$
The square of the norm is
$$\lVert I*B-I*A\rVert^2=\sum_n z_n^2$$
To find the minimum (the proof that this yields a minimum is shown later), take the partial derivative with respect to each element of $B$:
$${\partial\over\partial B_p}\sum_n z_n^2=2\sum_n z_n{\partial z_n\over\partial B_p} = 0 \tag{1}\label{eq1}$$
where $p$ is a valid index of $B$. The partial derivative of the convolution is
$${\partial z_n\over\partial B_p}=I_{n-p} \tag{2}\label{eq2}$$
therefore $\eqref{eq1}$ becomes
$$\sum_n z_n{\partial z_n\over\partial B_p} = \sum_n I_{n-p}z_n = \sum_n\sum_{n'}I_{n-p}I_{n-n'}(B_{n'}-A_{n'})=0$$
giving us the following relation:
$$\sum_n\sum_{n'\in B}I_{n-p}I_{n-n'}B_{n'} = \sum_n\sum_{n'\in A}I_{n-p}I_{n-n'}A_{n'}\tag{3}\label{eq3}$$
This is a linear system of equations that can be solved using matrix methods. The notation $n'\in A$ is shorthand for the indices where $A$ is defined.

With the change of variable
$$n=k+p$$
things become more readable, and the coefficients turn out to be the autocorrelation of $I$:
$$\begin{aligned}\sum_n\sum_{n'}I_{n-p}I_{n-n'}A_{n'} &=\sum_k\sum_{n'}I_k I_{k+p-n'}A_{n'} \\ &= \sum_{n'}(I\star I)_{p-n'}A_{n'}\end{aligned}$$
Using this on both sides of $\eqref{eq3}$, we get
$$\sum_{n'\in B}(I\star I)_{p-n'}B_{n'}=\sum_{n'\in A}(I\star I)_{p-n'}A_{n'}$$
The RHS consists only of constants, and the autocorrelation of a real-valued signal is symmetric about index $0$. Therefore
$$\boxed{\sum_{n'\in B}(I\star I)_{p-n'}B_{n'}=[(I\star I)*A]_p}$$
There are as many values of $p$ as there are values of $n'$ on the LHS, and due to the symmetry of $I\star I$, the coefficients form a symmetric Toeplitz matrix. Solve this system to get the optimal $B$.
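The boxed system can be assembled and solved numerically. A 1D sketch (my own illustration, not part of the original answer): $B$'s indices are centered on $A$'s, `np.correlate` supplies $I\star I$, and the result is cross-checked against a direct least-squares fit. Note the cross-check uses the full (untruncated) zero-padded convolutions, since the derivation's sums over $n$ are unrestricted:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
I = rng.standard_normal(N)
A = rng.standard_normal(5)
offA = np.arange(-2, 3)   # A's indices, centered
offB = np.arange(-1, 2)   # B's indices, centered on A's center

# autocorrelation of I; lag 0 sits at index N - 1 of the 'full' output
r = np.correlate(I, I, mode="full")
lag0 = N - 1

# boxed system: sum_{n' in B} (I star I)_{p-n'} B_{n'} = [(I star I) * A]_p
M = np.array([[r[lag0 + p - q] for q in offB] for p in offB])
rhs = np.array([sum(r[lag0 + p - a] * A[i] for i, a in enumerate(offA))
                for p in offB])
B = np.linalg.solve(M, rhs)

# cross-check against a direct least-squares fit on the full zero-padded
# convolutions: column k is I placed at offset k on a common index grid
L = N + len(A) - 1
def column(k):
    out = np.zeros(L)
    out[k + 2 : k + 2 + N] = I   # output index runs over -2 .. N+1
    return out

X = np.column_stack([column(k) for k in offB])
y = sum(A[i] * column(a) for i, a in enumerate(offA))
B_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(B, B_ls)
```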



Unfortunately, the above also says that the optimal $B$ depends on $I$. Also, as I mentioned in the comment, if you want the centers of $A$ and $B$ to match, you have to pad $B$ accordingly on both sides.



Proof that this minimizes $\lVert z\rVert$

From $\eqref{eq2}$ we see that $z_n$ is linear in $B$, so its second derivatives vanish. The Hessian of $\lVert z\rVert^2$ is then
$$\begin{aligned}\mathrm{H}_{pq}&={\partial^2\over\partial B_p\partial B_q}\sum_n z_n^2=2{\partial\over\partial B_q}\sum_n z_n{\partial z_n\over\partial B_p}\\
&= 2{\partial\over\partial B_q}\sum_n z_n I_{n-p} \\
&= 2\sum_n I_{n-p}I_{n-q}
\end{aligned}$$

$I_{n-p}$ is an element of the convolution matrix $C$ of $I$. The Hessian can be rewritten as
$$\mathrm{H} = 2C^\mathsf{T}C$$
which is positive semidefinite, since $v^\mathsf{T}\mathrm{H}v = 2\lVert Cv\rVert^2 \ge 0$ for every $v$; the critical point is therefore a minimum. The test is inconclusive when $C$ is rank-deficient, which happens when you have a flat image.
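A quick numerical check of the Hessian's positive semidefiniteness (my own sketch; `C` here is the zero-padded convolution matrix of $I$ restricted to $B$'s three offsets):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
I = rng.standard_normal(N)

# columns of C: the full zero-padded convolution of I with a unit impulse
# at each of B's offsets (-1, 0, 1), on a common index grid of length N + 2
offB = (-1, 0, 1)
C = np.zeros((N + 2, len(offB)))
for j, k in enumerate(offB):
    C[k + 1 : k + 1 + N, j] = I

H = 2 * C.T @ C                                  # Hessian of the squared norm
assert np.all(np.linalg.eigvalsh(H) >= -1e-9)    # positive semidefinite
```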



2D version

The line of reasoning is the same as in the 1D version. For the convolution
$$z_{mn} = \sum_{m'n'}I_{(m-m')(n-n')}(B_{m'n'}-A_{m'n'})$$
minimizing its norm is equivalent to solving the linear system
$$\boxed{\sum_{m'n'\in B}(I\star I)_{(p-m')(q-n')}B_{m'n'}=[(I\star I)*A]_{pq}}$$
The matrix on the LHS is no longer a symmetric Toeplitz matrix, but if you index $B$ row by row (or perhaps column by column; I have not tried that), you instead get a symmetric block Toeplitz matrix.
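A 2D sketch of the same normal equations (my own illustration, not part of the original answer): with $B$ indexed row by row, the Gram matrix holds values of $I\star I$ and is symmetric block Toeplitz, and the solution agrees with a direct least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(3)
Hgt, Wid = 20, 20
I = rng.standard_normal((Hgt, Wid))
A = rng.standard_normal((5, 5))
offA = [(dy, dx) for dy in range(-2, 3) for dx in range(-2, 3)]
offB = [(dy, dx) for dy in range(-1, 2) for dx in range(-1, 2)]  # row by row

def column(dy, dx):
    """Full zero-padded 2D convolution of I with a unit impulse at (dy, dx)."""
    out = np.zeros((Hgt + 4, Wid + 4))   # common support of the full outputs
    out[dy + 2 : dy + 2 + Hgt, dx + 2 : dx + 2 + Wid] = I
    return out.ravel()

X = np.column_stack([column(dy, dx) for dy, dx in offB])
y = sum(a * column(dy, dx) for a, (dy, dx) in zip(A.ravel(), offA))

# Gram matrix X^T X holds the values (I star I)_{(p-m')(q-n')}; with B
# indexed row by row it is symmetric block Toeplitz
G = X.T @ X
b = np.linalg.solve(G, X.T @ y)
B = b.reshape(3, 3)

# sanity check: same answer as a direct least-squares fit
b_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(b, b_ls)
```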






        edited Dec 15 '18 at 13:09

























        answered Dec 15 '18 at 13:00









syockit
