Linear dependence lemma - non-zero vector

The theorem states that if $(v_1, \ldots, v_m)$ is linearly dependent in $V$ and $v_1 \neq 0$, then there exists $j \in \{2, \ldots, m\}$ such that $v_j \in \operatorname{span}(v_1, \ldots, v_{j-1})$.



If $(v_1, v_2, \ldots, v_m)$ is linearly dependent, then by the definition of linear dependence, at least one of these vectors can be expressed as a linear combination of the remaining vectors.



To be more precise, the vectors are linearly dependent if there exist scalars $a_i$, not all zero, satisfying



$$a_1 v_1 + a_2 v_2 + \cdots + a_m v_m = 0.$$



For instance, if $0v_1 + v_2 + 3v_3 = 0$, then $v_2 = -3v_3$. So you just pick a vector $v_j$ associated with a non-zero $a_j$, subtract everything else from both sides, divide by $a_j$, and get $v_j$ expressed as a linear combination of the remaining vectors. Therefore $v_j \in \operatorname{span}(v_1, \ldots, v_{j-1})$, by definition of span.
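
In symbols, the step just described: if $a_j \neq 0$ in the dependence relation, then

$$v_j = -\frac{1}{a_j} \sum_{i \neq j} a_i v_i,$$

a linear combination of the remaining vectors.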



I just don't understand the requirement that $v_1 \neq 0$. The argument seems to work whether or not $v_1$ is zero.










linear-algebra

asked Jan 16 '16 at 16:00 by user4205580
2 Answers

The requirement that $v_1 \neq 0$ is necessary. For instance, let $V = \mathbb{R}^2$, $v_1 = 0$ and $v_2 = \mathbf{i}$, which form a linearly dependent set.



Then the only $j$ you can pick is $2$, but $v_2 \notin \operatorname{span}(v_1)$.



Furthermore, note that your proof as written is not correct. You have to express $v_j$ as a linear combination of the $v_i$ with $i < j$, but there is no reason to suppose that your construction avoids using $v_i$ with $i > j$.



          I'll show one way to do the argument correctly.



Let $V$ be a vector space and $v_1, \ldots, v_m \in V$ linearly dependent with $v_1 \neq 0$. Then by linear dependence there exist scalars $c_1, \ldots, c_m$, not all $0$, such that $$c_1 v_1 + \cdots + c_m v_m = 0.$$



Let $i \in \{1, \ldots, m\}$ be the greatest index such that $c_i \neq 0$. If $i = 1$ then we have
$$c_1 v_1 = 0$$
and so $v_1 = 0$, contradicting our hypothesis. Hence $i > 1$, and since $c_k = 0$ for every $k > i$,
$$c_1 v_1 + \cdots + c_i v_i = 0,$$
and thus $c_i v_i = -(c_1 v_1 + \cdots + c_{i-1} v_{i-1})$. By the definition of $i$, $c_i \neq 0$, and so
$$v_i = -\frac{1}{c_i}(c_1 v_1 + \cdots + c_{i-1} v_{i-1}).$$
Consequently $v_i \in \operatorname{span}(v_1, \ldots, v_{i-1})$, and so $i$ is an example of the required $j$.
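
The construction is concrete enough to run. Here is a small numerical sketch (my own illustration, not part of the original answer, assuming vectors in $\mathbb{R}^3$ and a hand-chosen dependence relation) of the step "take the greatest index with non-zero coefficient and solve":

```python
import numpy as np

# Vectors in R^3; v1 is non-zero and the list is linearly dependent.
v = [np.array([1.0, 0.0, 0.0]),   # v1, non-zero as the lemma requires
     np.array([0.0, 1.0, 0.0]),   # v2
     np.array([2.0, 3.0, 0.0])]   # v3 = 2*v1 + 3*v2, so the list is dependent

# A dependence relation c1*v1 + c2*v2 + c3*v3 = 0, chosen by hand:
c = np.array([2.0, 3.0, -1.0])
assert np.allclose(sum(ci * vi for ci, vi in zip(c, v)), 0.0)

# Greatest index with a non-zero coefficient (0-based, so index k here
# plays the role of i = k + 1 in the proof).
i = max(k for k in range(len(c)) if not np.isclose(c[k], 0.0))

# v_i = -(1/c_i) * (c_1 v_1 + ... + c_{i-1} v_{i-1})
v_i = -(1.0 / c[i]) * sum(c[k] * v[k] for k in range(i))
assert np.allclose(v_i, v[i])  # v3 lies in span(v1, v2), as promised
print(f"v_{i + 1} = {v_i} is in the span of the earlier vectors")
```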






answered Jan 16 '16 at 16:09 by James (edited Jan 16 '16 at 16:23)
• 'If $i=1$ then we have $c_1 v_1 = 0$ and so $v_1 = 0$, contradicting our hypothesis.' I don't see any contradiction here. Could you explain? – user4205580, Jan 16 '16 at 16:55










• We assumed $v_1 \neq 0$. – James, Jan 16 '16 at 16:59










• Okay, but the point was to show that the assumption $v_1 \neq 0$ is necessary. Is it necessary because $c_1 v_1 = 0$ implies that $v_1$ cannot be expressed as a linear combination of vectors whose index is less than $1$, simply because there are no such vectors (and the theorem says every such vector CAN be expressed that way)? By assuming $v_1 \neq 0$ we eliminate this problem. I think I get it (or not)? – user4205580, Jan 16 '16 at 17:02












• No, you don't get it. There are two issues. The first is that the proof you give of the proposition is incorrect; this is addressed by everything after "I'll show one way to do the argument correctly." The second issue is whether or not you need the hypothesis that $v_1 \neq 0$; I showed that you do in the first few paragraphs, by giving an example which satisfies every hypothesis except "$v_1 \neq 0$" and showing that the conclusion fails in that example. – James, Jan 16 '16 at 17:09












• I think you are confusing a few issues. I used the hypothesis that $v_1 \neq 0$ in my proof of the proposition. This does not mean that the theorem needs this hypothesis. To show that a hypothesis is necessary for a theorem to hold requires you to give a counter-example to the theorem when the hypothesis doesn't hold. – James, Jan 16 '16 at 17:22




















The $0$ vector can never be part of a linearly independent list of vectors. In other words, a list of vectors containing the $0$ vector is always linearly dependent.



Therefore consider a list of linearly independent vectors $(v_2, v_3, \ldots, v_n)$. Now $(0, v_2, v_3, \ldots, v_n)$ is linearly dependent; however, there exists no $j \in \{2, \ldots, n\}$ such that $v_j \in \operatorname{span}(0, v_2, \ldots, v_{j-1})$, by the choice of the list $(v_2, v_3, \ldots, v_n)$.
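
A quick numerical check of this counterexample (my own sketch, assuming vectors in $\mathbb{R}^2$ and using matrix rank to test span membership): prepending the zero vector to an independent list gives a dependent list in which no vector lies in the span of its predecessors.

```python
import numpy as np

# The list (0, e1, e2): the zero vector prepended to an independent list.
vs = [np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Dependent: 3 vectors in R^2, so the rank is below the list length.
assert np.linalg.matrix_rank(np.column_stack(vs)) < len(vs)

for j in range(1, len(vs)):
    # v_j is in the span of its predecessors iff appending it to them
    # leaves the rank of the stacked matrix unchanged.
    rank_prev = np.linalg.matrix_rank(np.column_stack(vs[:j]))
    rank_with = np.linalg.matrix_rank(np.column_stack(vs[:j + 1]))
    print(f"v_{j + 1} in span of predecessors? {rank_prev == rank_with}")
    # Prints False for every j, so the lemma's conclusion fails here.
```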



I guess you are mostly uncomfortable with the special treatment of $v_1$. The lemma could have been stated as: for a list of linearly dependent vectors, there exists at least one vector which belongs to the span of the rest of the vectors; in that case there is no need for the special treatment of the first vector. This is because the $0$ vector can always be expressed as a linear combination of the other vectors (a linear combination where all the coefficients are $0 \in \mathbb{F}$). However, there is a lot to be gained from this particular way of stating the lemma, as it makes some other proofs really easy and elegant.



Some books do not impose any restriction on $v_1$ and just state that $\exists j$ such that $v_j \in \operatorname{span}(v_1, \ldots, v_{j-1})$. Now this would mean that if $j = 1$, then $v_1 \in \operatorname{span}()$. In this case they define $\operatorname{span}()$, the span of the empty list, to be the trivial subspace $\{0\}$. So naturally $v_1 \in \operatorname{span}() \implies v_1 = 0$.



          Hope this answers your question.






answered Dec 23 '18 at 4:21 by Chayan Ghosh