Is There an $ {L}_{1} $ Norm Equivalent to Ordinary Least Squares?












The ordinary least squares (OLS) method is very useful. It gives you the solution to the problem

$$ \arg \min_{x} {\left\| A x - b \right\|}_{2}^{2} $$

Now, suppose the problem is the same, but the $1$-norm is used instead:

$$ \arg \min_{x} {\left\| A x - b \right\|}_{1} $$

Is there a known solution (approximate or not) to that problem? Is there any time-efficient algorithm to find this optimum?
I've read about the Theil-Sen estimator, which should do the trick in dimension $2$, and about some multidimensional extensions of it, but that algorithm's computation time increases hugely with dimension; I don't think I'd get any solution within a year if I used it.










convex-optimization linear-programming least-squares median

asked Jul 10 '18 at 15:32 by Pierre · edited Jul 27 '18 at 18:53 by Royi

  • There is no known formula for the solution, as there is for least squares problems. Minimizing $\| Ax - b \|_1$ is a convex problem and can be solved with standard convex optimization algorithms. An easy way to do it is using the CVX or CVXPY software package, which will let you solve small instances of this problem (with a few hundred variables perhaps) using just a few lines of readable code. How large is your problem?
    – littleO, Jul 10 '18 at 16:19

  • Is that supposed to be $\ell_1$ rather than $N_1$ in the title?
    – littleO, Jul 10 '18 at 16:20

  • @littleO: I will investigate this. The data size is ~20 variables and ~1000 observations, but it needs to be done in a loop, so it needs to be fast. Thanks!
    – Pierre, Jul 10 '18 at 16:21

  • @littleO: in my maths courses, we used $N_1$ as the name of the "1-norm", i.e. the sum of absolute values. It might differ from country to country... I'll edit.
    – Pierre, Jul 10 '18 at 16:22

  • Ah, that's a small problem. CVX or CVXPY is a good thing to try at first. If it's not fast enough, you could formulate the problem as a linear program and use a good LP solver such as Mosek.
    – littleO, Jul 10 '18 at 16:23
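
(A minimal sketch of the CVXPY route suggested in these comments, assuming CVXPY and NumPy are installed; the data below are placeholders at the ~1000 × ~20 size mentioned above.)

```python
import numpy as np
import cvxpy as cp

# Placeholder data: ~1000 observations, ~20 variables.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
b = rng.standard_normal(1000)

# Minimize ||Ax - b||_1 as a convex problem.
x = cp.Variable(20)
problem = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 1)))
problem.solve()          # CVXPY picks a suitable solver (an LP solver also works)

x_l1 = x.value           # the l1-regression coefficients
```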
















2 Answers
The problem is given by:

$$ \arg \min_{x} {\left\| A x - b \right\|}_{1} $$

Introducing an auxiliary vector $t$ that bounds the residual $r = A x - b$ componentwise (that is, $\left| r \right| \preceq t$), we can rewrite the problem as:

$$\begin{align*}
\arg \min_{x, t} \quad & \boldsymbol{1}^{T} t \\
\text{subject to} \quad & -t \preceq A x - b \preceq t
\end{align*}$$

This can easily be solved by any Linear Programming solver.

You may see some more related approaches in my answer to Mathematics Q1639716 - How Can $ {L}_{1} $ Norm Minimization with Linear Equality Constraints (Basis Pursuit / Sparse Representation) Be Formulated as Linear Programming?

answered Jul 27 '18 at 16:54 by Royi
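
(For illustration, a minimal sketch of this LP reformulation using SciPy's `linprog`; SciPy with the HiGHS backend is an assumption, and `l1_regression` is just a hypothetical helper name.)

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(A, b):
    """Solve arg min_x ||Ax - b||_1 via the LP:  min 1^T t  s.t.  -t <= Ax - b <= t."""
    m, n = A.shape
    # Stack the variables as z = [x, t]; the objective only involves t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    #  A x - b <= t   ->   A x - t <=  b
    # -(A x - b) <= t ->  -A x - t <= -b
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m  # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```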

































Not equivalent, but one approach is iteratively re-weighted least squares (IRLS), where you solve a sequence of ordinary (weighted) least squares problems

$$ x_{i} = \arg \min_x {\left\| W_i (A x - b) \right\|}_2 $$

Re-weighting means calculating $W_{i+1}$ based on $x_i$; for the $\ell_1$ objective a standard choice is a diagonal $W_{i+1}$ with entries $1 / \sqrt{\left| (A x_i - b)_j \right| + \delta}$ (with a small $\delta > 0$ to avoid division by zero), so that the weighted squared $2$-norm approximates ${\left\| A x - b \right\|}_1$.

Basically, it is a loop:

1. Calculate $W_k$ based on the previous solution.
2. Calculate $x_k$.
3. If $\| x_k - x_{k-1} \| > \epsilon$, loop back to 1.

answered Dec 9 '18 at 21:52 by mathreadler
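
(A minimal NumPy sketch of such an IRLS loop, assuming the weight choice above; `irls_l1`, the iteration cap, and the tolerances are illustrative assumptions.)

```python
import numpy as np

def irls_l1(A, b, n_iter=50, delta=1e-8, tol=1e-10):
    """Approximate arg min_x ||Ax - b||_1 by iteratively re-weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # start from the OLS solution
    for _ in range(n_iter):
        r = A @ x - b
        # Diagonal weights 1/sqrt(|r| + delta), so that ||W r||_2^2 ~ ||r||_1.
        w = 1.0 / np.sqrt(np.abs(r) + delta)
        x_new = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
        if np.linalg.norm(x_new - x) < tol:           # stop when the iterates settle
            return x_new
        x = x_new
    return x
```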












