Consensus Douglas-Rachford for $x^* = \arg\min_{x \in \mathbb{R}^N} \Vert y - x\Vert^2 \ \text{s.t.} \ x \in \bigcap_{i=1}^m C_i$


























I would like to solve the following projection problem using a fast first-order method, e.g., consensus ADMM or Douglas-Rachford (DR):



\begin{align}
x^* = \arg\min_{x \in \mathbb{R}^N} \Vert y - x\Vert^2 \quad \text{s.t.} \quad x \in \bigcap_{i=1}^m C_i ,
\end{align}

where $y \in \mathbb{R}^{N}$ and, for example, $C_i = \left\{x : a_i^T x \leq b_i\right\}$. In my case, all the sets are convex.



I experimented with consensus ADMM (following a recipe from an article) for the example above with $C_i = \left\{x_i : a_i^T x_i \leq b_i\right\}$, but it takes a very long time to converge. I also found a suggestion here, but I could not derive the Douglas-Rachford iteration myself (I got stuck because my mathematics background is not strong, so please excuse me). I was able to implement the other suggested approach, steepest descent (SD), but it shares the same problem as ADMM: it needs many iterations to converge (more than 1000).



Questions:




  1. I would like to derive the consensus DR iteration suggested here. Can you help me?


  2. Do you have suggestions for other fast first-order methods (or black-box solvers, e.g., CVX)?





Looking at the proposal here, I have pasted the function splitting below for convenience:



\begin{align*}
\text{minimize} & \quad \underbrace{\Vert y - x_0\Vert^2 + \sum_i \delta_i(x_i)}_{f(x_0,x_1,\ldots,x_m)} + \underbrace{\delta_S(x_0,x_1,\ldots,x_m)}_{g(x_0,x_1,\ldots,x_m)}
\end{align*}

where
\begin{equation}
S = \{(x_0,x_1,\ldots,x_m) \mid x_0 = x_1 = \cdots = x_m \}
\end{equation}

$\delta_i$ is the indicator function of $C_i$, and $\delta_S$ is the indicator function of $S$.
The variables in this reformulated problem are the vectors $x_0, x_1, \ldots, x_m$;
the indicator function $\delta_S$ enforces the constraint that all these vectors are equal.



Following the comments in the same link, the prox operators are



$(\overline{x}, \overline{x}, \cdots, \overline{x}) = \operatorname{prox}_g\left(x_0, x_1, \cdots, x_m\right)$, where $\overline{x} = \frac{1}{m+1} \sum_{i=0}^{m} x_i$.
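In other words, $\operatorname{prox}_g$ replaces every stacked copy by their average, which is the Euclidean projection onto $S$. A quick numerical sketch of this (with made-up vectors):

```python
import numpy as np

# made-up stacked copies (one row per copy)
X = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [2.0, 4.0]])

xbar = X.mean(axis=0)                    # the average copy
P = np.tile(xbar, (X.shape[0], 1))       # prox_g(X): every row replaced by xbar

# P is closer to X (in Frobenius norm) than any other consensus point,
# e.g. the one built from an arbitrary vector c
c = np.array([1.0, 1.0])
closer = np.sum((X - P) ** 2) < np.sum((X - np.tile(c, (X.shape[0], 1))) ** 2)
```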



If $C_i = \left\{x_i : a_i^T x_i \leq b_i \right\}$, then the prox operator of $\delta_i$ is the projection



\begin{align*}
P_{C_i}\left(x_i\right) =
\begin{cases}
x_i + \dfrac{b_i - a_i^{T} x_i}{\left\Vert a_i\right\Vert_2^2}\, a_i, & \text{if } a_i^{T} x_i > b_i; \\
x_i, & \text{if } a_i^{T} x_i \leq b_i.
\end{cases}
\end{align*}
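As a sanity check on this formula (with made-up numbers for $a_i$, $b_i$, and $x_i$): the projected point should land exactly on the boundary hyperplane, with the displacement parallel to the normal $a_i$.

```python
import numpy as np

# made-up data: a point outside the half-space {x : a^T x <= b}
a = np.array([3.0, 4.0])
b = 2.0
x = np.array([2.0, 1.0])        # a @ x = 10 > b, so x is outside

# the projection formula from above
p = x + ((b - a @ x) / np.linalg.norm(a) ** 2) * a

# p lies on the boundary, and x - p is parallel to a
on_boundary = np.isclose(a @ p, b)
parallel = np.isclose((x - p)[0] * a[1] - (x - p)[1] * a[0], 0.0)
```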



Now that I have both prox operators, my problem is how to plug them into the DR iteration (sorry if this is obvious to you):



\begin{align}
x^{(t)} &= \operatorname{prox}_f(z^{(t-1)}) \\
z^{(t)} &= z^{(t-1)} + \operatorname{prox}_g(2x^{(t)}-z^{(t-1)}) - x^{(t)}
\end{align}
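To make the question concrete, here is a minimal NumPy sketch of how I imagine the pieces fit together (my own reading, not a vetted implementation: the step size is fixed to 1, $\operatorname{prox}_f$ is applied block-wise with the closed-form prox of $\Vert y - x_0\Vert^2$ on the first block and the half-space projections on the rest, and $\operatorname{prox}_g$ averages all stacked copies). Please correct me if this is not what was intended.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Euclidean projection onto {x : a^T x <= b} (the formula above)."""
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def prox_f(Z, y, A, b):
    """Block-wise prox of f with step 1: closed-form prox of ||y - x0||^2
    on the first block, projection onto C_i on block i."""
    X = Z.copy()
    X[0] = (2.0 * y + Z[0]) / 3.0   # argmin_u ||y - u||^2 + 0.5 ||u - z0||^2
    for i in range(1, Z.shape[0]):
        X[i] = proj_halfspace(Z[i], A[i - 1], b[i - 1])
    return X

def prox_g(Z):
    """Projection onto the consensus set S: every copy -> the average."""
    return np.tile(Z.mean(axis=0), (Z.shape[0], 1))

def consensus_dr(y, A, b, iters=5000):
    """Douglas-Rachford on the splitting above; returns the consensus point."""
    m = A.shape[0]
    Z = np.tile(y, (m + 1, 1))      # z^(0): m+1 stacked copies of y
    for _ in range(iters):
        X = prox_f(Z, y, A, b)
        Z = Z + prox_g(2.0 * X - Z) - X
    return X.mean(axis=0)

# made-up example: project y = (2, 2) onto the intersection of
# {x : x_1 <= 1} and {x : x_2 <= 1}; the answer should be (1, 1)
y = np.array([2.0, 2.0])
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x_hat = consensus_dr(y, A, b)
```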




































      optimization convex-optimization






      asked Nov 28 '18 at 18:52









learning






















