(Why) can we treat a function of a variable as another independent variable?

I'm currently reading my numerical analysis textbook and something's bugging me. To get into it, let's take a look at the following differential equation:

$$u'(x) = f(x, u(x))$$

In order to determine the stability of the equation, one may calculate the Jacobian,

$$J(x, u(x)) = \left.\frac{\partial f}{\partial u}\right|_{(x, u(x))}$$

Here is a specific differential equation:

$$u'(x) = -\alpha(u(x) - \sin(x)) + \cos(x)$$

for which the Jacobian is

$$J(x, u(x)) = -\alpha$$

Basically, we treated both $\sin(x)$ and $\cos(x)$ as constants with respect to $u$, but I don't really understand why. Most of the time, when we take a derivative the variables are independent, which is not the case here, as they both depend on the same variable $x$.

This means that the "rate of change of $\sin(x)$ with respect to $u(x)$" is zero, but the value of $u(x)$ only changes if the value of $x$ itself changes, so shouldn't the value of $\sin(x)$ change as well?



Thank you!

multivariable-calculus derivatives numerical-methods jacobian numerical-calculus

asked Nov 23 at 21:36 by FredV, edited Nov 24 at 21:41 by Ethan Bolker

  • "Most of the time, when we take a derivative the variables are independant, which is not the case here as they both depend on the same variable x." I don't understand. When you take the derivative of a function with one variable there is only one variable and you don't treat the variables as independent. As $sin$ and $cos$ are both dependent on one variable this is a function with one variable. Can you give an example of a case where "take a derivative the variables are independant"
    – fleablood
    Nov 23 at 22:02






  • 1




    "This means that the "rate of change of sin(x) with respect to u(x)" is zero, but the value of u(x) only changes if the value of x itself changes, so shouldn't the value of sin(x) change as well?" Of course it does. But that is with respect to $x$ and not with respect to $u(x)$.
    – fleablood
    Nov 23 at 22:08










  • @fleablood Thanks for your answers, I thought because both sin(x) and u(x) depend on x, that sin(x) would also depend on u(x) but as I read this it makes sense that these two functions are not linked to each other although they both depend on the same variable
    – FredV
    Nov 23 at 22:27








  • 1




    When you compute the linear expansion, you do it for the function $f:Bbb R^{1+n}toBbb R^n$. You get $f(x,u)approx f(x, u_0)+∂_uf(x,u_0)(u-u_0)$. Then you approximate the ODE $u'(x)=f(x,u(x))$ with the linearization of the right side. Further using $u=u_0+v$, this reads as $v'(x)=f(x,u_0)+∂_uf(x,u_0)v(x)$.
    – LutzL
    Nov 24 at 15:11
2 Answers

There is a difference between the partial derivative $\frac{\partial}{\partial x}$ and the total derivative $\frac{d}{dx}$. For example, if we have variables $(u,x)$ and the equation $f=f(x,u(x))=x^2+u^3$, then taking the partial derivative gives $\frac{\partial f}{\partial x}=2x$, while taking the total derivative gives $\frac{df}{dx}=2x+3u^2\frac{du}{dx}$ by the chain rule. This distinction is a key point in classical mechanics, for example, and captures essentially what you are asking.



See: What exactly is the difference between a derivative and a total derivative?
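As a quick illustration of the difference (a sketch of my own in SymPy, not part of the original answer), you can compute both derivatives symbolically: treat $u$ as a function of $x$ for the total derivative, and as an independent symbol for the partial one.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)  # u as a function of x

f = x**2 + u**3

# Total derivative: u(x) is carried along via the chain rule.
print(sp.diff(f, x))     # 2*x + 3*u(x)**2*Derivative(u(x), x)

# Partial derivative: first replace u(x) by an independent symbol.
v = sp.symbols('v')
print(sp.diff(f.subs(u, v), x))  # 2*x
```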

– answered Nov 24 at 15:19 by Dante Grevino, edited Nov 24 at 21:33


To make things simpler, imagine a very simple autonomous dynamical system

$$
\frac{{\rm d}u}{{\rm d}x} = \alpha (u - u_0) \tag{1}
$$

for some constants $\alpha$ and $u_0$. The solutions to this system are of the form

$$
u(x) - u_0 = c e^{\alpha x} \tag{2}
$$

The interesting thing to note here is that if $\alpha > 0$ then the distance between $u(x)$ and $u_0$ grows exponentially with increasing $x$, so $u = u_0$ is said to be unstable. Whereas if $\alpha < 0$ the opposite occurs: the distance between $u$ and $u_0$ shrinks with increasing $x$, and in this case $u = u_0$ is stable.
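
As a small numerical illustration (a sketch of my own; the values of $\alpha$, the step size, and the starting point are arbitrary), forward-Euler integration of Eq. (1) from a point near $u_0$ shows both behaviours directly:

```python
def distance_after_flow(alpha, u0=1.0, u_init=1.1, h=0.01, steps=500):
    # Forward-Euler integration of u' = alpha * (u - u0).
    u = u_init
    for _ in range(steps):
        u += h * alpha * (u - u0)
    return abs(u - u0)

print(distance_after_flow(alpha=+2.0))  # grows:   u0 is unstable
print(distance_after_flow(alpha=-2.0))  # shrinks: u0 is stable
```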



Now let's make things a bit more general. Imagine a system of the form

$$
\frac{{\rm d}u}{{\rm d}x} = f(u) \tag{3}
$$

and suppose there exists a point $u_0$ such that $f(u_0) = 0$, as in Eq. (1). If you want to understand the local stability of (3), you can Taylor expand $f$ around $u_0$:

$$
f(u) = f(u_0) + \left.\frac{{\rm d}f}{{\rm d}u}\right|_{u = u_0}(u - u_0) + \cdots \tag{4}
$$

Remember that $f(u_0) = 0$, so to first order $f(u) \approx f'(u_0)(u - u_0)$, and Eq. (3) becomes

$$
\frac{{\rm d}u}{{\rm d}x} \approx \underbrace{f'(u_0)}_{\alpha}(u - u_0) \tag{5}
$$

Now compare this with Eq. (1) and you realize that in order to understand the stability of the system around the point $u_0$ you need to know the value of $f'(u_0)$, a.k.a. the Jacobian. There are a number of caveats here you should be aware of; your text probably discusses them (e.g. what happens if $f'(u_0) = 0$, ...).
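
To make Eqs. (4)-(5) concrete, here is a minimal sketch (my own, with a made-up $f$ that has an equilibrium at $u_0 = 1$): a finite difference estimates $f'(u_0)$, and near $u_0$ the nonlinear $f$ is well approximated by its linearization.

```python
import numpy as np

def f(u):
    # Hypothetical right-hand side with an equilibrium at u0 = 1: f(1) = 0.
    return np.sin(u - 1.0) * np.exp(u)

u0, h = 1.0, 1e-6
alpha = (f(u0 + h) - f(u0 - h)) / (2 * h)  # f'(u0), the 1-D "Jacobian"
print(alpha)  # about 2.718 > 0, so u0 = 1 is unstable

# Close to u0, f(u0 + du) and alpha*du nearly coincide:
for du in [0.1, 0.01, 0.001]:
    print(du, f(u0 + du), alpha * du)
```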





EDIT

Now imagine a system in two dimensions, something like

$$
\frac{{\rm d}u}{{\rm d}x} = f(u, v) ~~~~ \frac{{\rm d}v}{{\rm d}x} = g(u, v) \tag{6}
$$

You can define vectors ${\bf z} = {u \choose v}$ and ${\bf F} = {f \choose g}$ so that the system above can be written as

$$
\frac{{\rm d}{\bf z}}{{\rm d}x} = {\bf F}({\bf z}) \tag{7}
$$

In this case nothing changes much: you can repeat the same analysis as in the first part and realize that the stability of the system around a point ${\bf z} = {\bf z}_0$ is given by the eigenvalues of the Jacobian evaluated at that location. And as before, we require that $\color{blue}{{\bf F}({\bf z}_0) = 0}$. I highlight this condition because it will become important later.
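
For instance (a sketch of my own with a made-up two-dimensional field, not from the original answer), the Jacobian at an equilibrium and its eigenvalues can be computed numerically:

```python
import numpy as np

def F(z):
    # Hypothetical system u' = v, v' = -u - 0.5*v, equilibrium at (0, 0).
    u, v = z
    return np.array([v, -u - 0.5 * v])

def jacobian(F, z0, h=1e-6):
    # Central finite-difference Jacobian of F at z0.
    n = len(z0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(z0 + e) - F(z0 - e)) / (2 * h)
    return J

J = jacobian(F, np.zeros(2))
print(np.linalg.eigvals(J))  # both real parts negative: (0, 0) is stable
```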



Now to the final part. Instead of an autonomous system, consider a system of the form

$$
\frac{{\rm d}u}{{\rm d}x} = f(x, u) \tag{8}
$$

You could rename $v = x$ (that is, create a new state), and note that

$$
\frac{{\rm d}u}{{\rm d}x} = f(u, v) ~~~~ \frac{{\rm d}v}{{\rm d}x} = 1 \tag{9}
$$

So in theory you could repeat the same analysis all over again, but you can see from this that the resulting field ${\bf F}$ never vanishes (that is, the blue condition above can never be satisfied).
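
The augmentation in Eq. (9) is easy to mechanize. Here is a minimal sketch (my own, using SciPy's `solve_ivp` on the ODE from the question; $\alpha = 4$ and the initial condition are arbitrary) that integrates the extended state $(u, v)$ with $v' = 1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 4.0  # arbitrary illustrative value

def F(x, z):
    # Extended autonomous state z = (u, v), with v standing in for x.
    u, v = z
    du = -alpha * (u - np.sin(v)) + np.cos(v)  # the question's ODE
    dv = 1.0                                   # v' = 1, so v(x) = x + v(0)
    return [du, dv]

sol = solve_ivp(F, (0.0, 10.0), [0.0, 0.0])
print(sol.y[0, -1], np.sin(10.0))  # u(x) tracks sin(x) when alpha > 0
```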

– answered Nov 23 at 22:07 by caverac, edited Nov 24 at 9:42

  • Thanks for the detailed answer! It's exactly the stuff I'm reading right now. However, my question was not so much about stability but more about what it means to take the derivative of $\sin(x)$ and $\cos(x)$ with respect to $u$.
    – FredV
    Nov 23 at 22:30










  • @FredV Sorry, I extended my answer; hopefully it makes more sense now. Otherwise I will gladly delete it.
    – caverac
    Nov 24 at 9:43