Connection between Fourier transform and Taylor series























Both the Fourier transform and the Taylor series are means of representing a function in a different form.



What is the connection between these two? Is there a way to get from one to the other (and back again)? Is there an overall, connecting (geometric?) intuition?






























  • 2




    Not answering your question, but a summary of the reasons that the Fourier transform tends to be favoured in engineering applications can be found in the "Motivation" section of the introduction to this paper.
    – yasmar
    Oct 20 '10 at 23:12















intuition fourier-analysis taylor-expansion integral-transforms






edited Dec 2 '17 at 16:38









nbro


asked Oct 20 '10 at 15:29









vonjd









6 Answers























Accepted answer (115 votes)










Assume that the Taylor expansion $f(x)=\sum_{k=0}^\infty a_k x^k$ is convergent for some $|x|>1$. Then $f$ can be extended in a natural way into the complex domain by writing $f(z)=\sum_{k=0}^\infty a_k z^k$ with $z$ complex and $|z|\leq 1$. So we may look at $f$ on the unit circle $|z|=1$. Consider $f$ as a function of the polar angle $\phi$ there, i.e., look at the function $F(\phi):=f(e^{i\phi})$. This function $F$ is $2\pi$-periodic, and its Fourier expansion is nothing else but $F(\phi)=\sum_{k=0}^\infty a_k e^{ik\phi}$, where the $a_k$ are the Taylor coefficients of the "real" function $x\mapsto f(x)$ we started with.
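This construction can be checked numerically. Here is a minimal sketch (my illustration, not part of the answer), using $f(z)=1/(1-z/2)$, whose Taylor series $\sum_k (1/2)^k z^k$ has radius of convergence $2>1$, so its restriction to the unit circle is well defined:

```python
# Numerical sketch (an illustration, not from the answer): for
# f(z) = 1/(1 - z/2), the Taylor coefficients are a_k = (1/2)^k, and the
# Fourier coefficients of F(phi) = f(e^{i phi}) should be the same numbers.
import numpy as np

N = 64                                   # samples on the unit circle
phi = 2 * np.pi * np.arange(N) / N
F = 1.0 / (1.0 - np.exp(1j * phi) / 2)   # F(phi) = f(e^{i phi})

# The DFT approximates c_k = (1/2*pi) * integral of F(phi) e^{-ik phi} dphi,
# i.e. the Fourier coefficients of the periodic function F.
c = np.fft.fft(F) / N

for k in range(5):
    print(k, round(c[k].real, 6))        # matches (1/2)**k; imag parts ~ 0
```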

























  • 9




    Great answer. But as Qiaochu says, "it's worth mentioning that the Fourier transform is much more general than this" — the Fourier expansion exists even for functions not got in this way (as the restriction to the unit circle of some function whose Taylor series has radius of convergence greater than 1).
    – ShreevatsaR
    Jul 2 '11 at 4:18












  • Why does $|z|$ have to be less than $1$?
    – Ooker
    Sep 25 '17 at 19:19










  • @Ooker: Depending on the particular $f$ we started with, its extension is analytic in some disc of radius $\rho>1$. For the purposes of this answer we only have to be sure that $f$ behaves nicely for $|z|=1$.
    – Christian Blatter
    Sep 26 '17 at 6:56


















Answer (53 votes)













A holomorphic function in an annulus containing the unit circle has a Laurent series about zero which generalizes the Taylor series of a holomorphic function in a neighborhood of zero. When restricted to the unit circle, this Laurent series gives a Fourier series of the corresponding periodic function. (This explains the connection between the Cauchy integral formula and the integral defining the coefficients of a Fourier series.)



But it's worth mentioning that the Fourier transform is much more general than this and applies in a broad range of contexts. I don't know that there's a short, simple answer to this question.



Edit: I guess it's also worth talking about intuition. One intuition for the Taylor series of a function $f(x)$ at a point is that its coefficients describe the displacement, velocity, acceleration, jerk, and so forth of a particle which is at location $f(t)$ at time $t$. And one intuition for the Fourier series of a periodic function $f(x)$ is that it describes the decomposition of $f(x)$ into pure tones of various frequencies. In other words, a periodic function is like a chord, and its Fourier series describes the notes in that chord.



(The connection between the two provided by the Cauchy integral formula is therefore quite remarkable; one takes an integral of $f$ over the unit circle and it tells you information about the behavior of $f$ at the origin. But this is more a magic property of holomorphic functions than anything else. One intuition to have here is that a holomorphic function describes, for example, the flow of some ideal fluid, and integrating over the circle gives you information about "sources" and "sinks" of that flow within the circle.)
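The remark about the Cauchy integral formula can be sketched numerically (my example, not Qiaochu's): with $z=e^{i\phi}$ on the unit circle, the Cauchy integral $a_n=\frac{1}{2\pi i}\oint f(z)z^{-(n+1)}\,dz$ becomes exactly the Fourier-coefficient integral $\frac{1}{2\pi}\int f(e^{i\phi})e^{-in\phi}\,d\phi$. For $f(z)=e^z$ the Taylor coefficients at the origin are $a_n=1/n!$:

```python
# Sketch: an integral over the unit circle recovers the behavior of a
# holomorphic f at the origin. For f(z) = e^z, a_n = 1/n!.
import math
import numpy as np

N = 256
phi = 2 * np.pi * np.arange(N) / N
f = np.exp(np.exp(1j * phi))                   # f(e^{i phi}) = e^{e^{i phi}}

for n in range(5):
    # mean over an equispaced periodic grid == trapezoid rule for
    # (1/2*pi) * integral of f(e^{i phi}) e^{-i n phi} dphi
    a_n = np.mean(f * np.exp(-1j * n * phi))
    print(n, a_n.real, 1 / math.factorial(n))  # the two columns agree
```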

























  • 3




    The "chord" analogy should not be taken at face value, by the way; any note you hear played on a physical instrument is not a pure sine wave but comes with a collection of overtones, and the relative strength of these overtones is what gives different instruments their different sounds. Actual pure sine waves - without overtones - can really only be generated electronically.
    – Qiaochu Yuan
    Oct 21 '10 at 17:05






  • 7




    displacement, velocity, acceleration, jerk, ... or for the non-physics intuition, $\mathrm{Taylor} \rightsquigarrow$ 0, mean, variance, skewness, kurtosis, ... in statistics. Or $\mathrm{Taylor} \rightsquigarrow$ height, tilt, curve, wiggle, womp, ... in terms of the graph $f(x)=x$.
    – isomorphismes
    Mar 18 '11 at 22:11




















Answer (25 votes)













There is a big difference between the Taylor series and Fourier transform. The Taylor series is a local approximation, while the Fourier transform uses information over a range of the variable.



The theorem Qiaochu mentions is very important in complex analysis and is one indication of how restrictive having a derivative in the complex plane is on functions.
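The local/global contrast can be illustrated with a small sketch (my example, not the answerer's): a degree-5 Taylor polynomial of $\sin$ about $0$ is excellent near the expansion point and poor far from it, whereas the one-term Fourier representation $\sin(x)$ is exact over the whole period:

```python
# Illustration of local vs. global: the Taylor polynomial only uses
# information at a point, so it degrades away from that point.
import math

def taylor5(x):
    # first three terms of the Maclaurin series of sin
    return x - x**3 / 6 + x**5 / 120

err_near = abs(taylor5(0.1) - math.sin(0.1))
err_far = abs(taylor5(3.0) - math.sin(3.0))
print(err_near)   # ~2e-11: almost exact near the expansion point
print(err_far)    # ~0.38: badly off far from the expansion point
```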




































Answer (24 votes)













There is an analogy, more direct for Fourier series. Both Fourier series and Taylor series are decompositions of a function $f(x)$, which is represented as a linear combination of a (countable) set of functions. The function is then fully specified by a sequence of coefficients, instead of by its values $f(x)$ for each $x$. In this sense, both can be called a transform
$$f(x) \leftrightarrow \{ a_0, a_1, \ldots \}$$

For the Taylor series (around $0$, for simplicity), the set of functions is $\{1, x, x^2, x^3, \ldots\}$. For the Fourier series it is $\{1, \sin(\omega x), \cos(\omega x), \sin(2\omega x), \cos(2\omega x), \ldots\}$.

Actually the Fourier series is one of the many transformations that use an orthonormal basis of functions. It can be shown that, in that case, the coefficients are obtained by "projecting" $f(x)$ onto each basis function, which amounts to an inner product, which (in the real scalar case) amounts to an integral. This implies that the coefficients depend on a global property of the function (over the full "period" of the function).

The Taylor series (which does not use an orthonormal basis) is conceptually very different, in that the coefficients depend only on local properties of the function, i.e., its behaviour in a neighbourhood (its derivatives).
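The projection view can be sketched numerically (my example): compute the Fourier cosine coefficients of $f(x)=x^2$ on $[-\pi,\pi]$ as inner products with the basis functions; the exact values, which you can get by integrating by parts, are $a_n=4(-1)^n/n^2$:

```python
# Sketch of the "projection" picture: each Fourier coefficient is an inner
# product (an integral over the whole period), i.e. a global quantity.
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
h = x[1] - x[0]
f = x**2

def integrate(g):
    # trapezoid rule for the integral of g over [-pi, pi]
    return (g.sum() - 0.5 * (g[0] + g[-1])) * h

for n in range(1, 4):
    a_n = integrate(f * np.cos(n * x)) / np.pi   # projection onto cos(nx)
    print(n, a_n, 4 * (-1)**n / n**2)            # the two columns agree
```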

























    • 1




      you meant $\{e^{ikx},\ k \in \mathbb{Z}\}$, not the awful $\sin$, $\cos$ basis
      – reuns
      May 30 '16 at 3:26




















Answer (6 votes)













The Taylor series at $t=0$ of a function $\mathrm{f}(t)$ is defined as
$$\mathrm{f}(t)=\sum_{j=0}^{\infty} h_j\cdot\frac{d^{j}}{dt^{j}}\mathrm{f}(0)\cdot t^{j}$$
where $h_j=1/j!$ and $\frac{d^0}{dt^0}\mathrm{f}(t)=\mathrm{f}(t)$. The Fourier series is defined as
$$\mathrm{f}(t)=\sum_{n=1}^{\infty}\left(a_n\cos\left(\frac{2\pi n t}{T}\right)+b_n\sin\left(\frac{2\pi n t}{T}\right)\right)$$
with coefficients:
$$\begin{align}
a_n&=\frac{2}{T}\int_{t_1}^{t_2}\mathrm{f}(t)\cos\left(\frac{2\pi n t}{T}\right)\,dt\\
b_n&=\frac{2}{T}\int_{t_1}^{t_2}\mathrm{f}(t)\sin\left(\frac{2\pi n t}{T}\right)\,dt
\end{align}$$
For a full-wave function $t_1=-T/2$ and $t_2=+T/2$ for any positive period $T$.
Let us find the Taylor series of the cosine and sine factors:
$$\begin{align}
\cos\left(\frac{2\pi n t}{T}\right)&=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}\left(\frac{2\pi n t}{T}\right)^{2k}\\
\sin\left(\frac{2\pi n t}{T}\right)&=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\left(\frac{2\pi n t}{T}\right)^{2k+1}
\end{align}$$
and substitute these expansions into the Fourier coefficients:
$$\begin{align}
a_n&=\frac{2}{T}\int_{t_1}^{t_2}\underbrace{\left(\sum_{j=0}^{\infty}h_j\frac{d^{j}}{dt^{j}}\mathrm{f}(0)\,t^{j}\right)\left(\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}\left(\frac{2\pi n t}{T}\right)^{2k}\right)}_{\mathrm{Tc}(t)}\,dt\\
b_n&=\frac{2}{T}\int_{t_1}^{t_2}\underbrace{\left(\sum_{j=0}^{\infty}h_j\frac{d^{j}}{dt^{j}}\mathrm{f}(0)\,t^{j}\right)\left(\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\left(\frac{2\pi n t}{T}\right)^{2k+1}\right)}_{\mathrm{Ts}(t)}\,dt
\end{align}$$
Now consider $\mathrm{Tc}(t)$. For the first few indices $j$ and $k$ the brackets can be expanded and the terms multiplied out:
$$\begin{align*}
\mathrm{Tc}(t)={}&\mathrm{f}(0)+\frac{d}{dt}\mathrm{f}(0)\cdot t+\left(-2\,\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{2}+\frac{1}{2}\frac{d^2}{dt^2}\mathrm{f}(0)\right)t^{2}\\
&+\left(-2\,\frac{d}{dt}\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{2}+\frac{1}{6}\frac{d^3}{dt^3}\mathrm{f}(0)\right)t^{3}\\
&+\left(\frac{2}{3}\,\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{4}-\frac{d^2}{dt^2}\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{2}+\frac{1}{24}\frac{d^4}{dt^4}\mathrm{f}(0)\right)t^{4}+\dots
\end{align*}$$
Now integrate term by term:
$$\begin{align*}
\int_{t_1}^{t_2}\mathrm{Tc}(t)\,dt={}&\mathrm{f}(0)\left(t_2-t_1\right)+\frac{1}{2}\frac{d}{dt}\mathrm{f}(0)\left(t_2^2-t_1^2\right)\\
&+\frac{1}{3}\left(-2\,\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{2}+\frac{1}{2}\frac{d^2}{dt^2}\mathrm{f}(0)\right)\left(t_2^3-t_1^3\right)\\
&+\frac{1}{4}\left(-2\,\frac{d}{dt}\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{2}+\frac{1}{6}\frac{d^3}{dt^3}\mathrm{f}(0)\right)\left(t_2^4-t_1^4\right)\\
&+\frac{1}{5}\left(\frac{2}{3}\,\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{4}-\frac{d^2}{dt^2}\mathrm{f}(0)\left(\frac{\pi n}{T}\right)^{2}+\frac{1}{24}\frac{d^4}{dt^4}\mathrm{f}(0)\right)\left(t_2^5-t_1^5\right)+\dots
\end{align*}$$
Collect the coefficients of each $\frac{d^i}{dt^i}\mathrm{f}(0)$ in the integral above:
$$\begin{align*}
\int_{t_1}^{t_2}\mathrm{Tc}(t)\,dt={}&\mathrm{f}(0)\left(\left(t_2-t_1\right)-\frac{2}{3}\left(\frac{\pi n}{T}\right)^2\left(t_2^3-t_1^3\right)+\frac{2}{15}\left(\frac{\pi n}{T}\right)^4\left(t_2^5-t_1^5\right)+\dots\right)\\
&+\frac{d}{dt}\mathrm{f}(0)\left(\frac{1}{2}\left(t_2^2-t_1^2\right)-\frac{1}{2}\left(\frac{\pi n}{T}\right)^2\left(t_2^4-t_1^4\right)+\frac{1}{9}\left(\frac{\pi n}{T}\right)^4\left(t_2^6-t_1^6\right)+\dots\right)\\
&+\frac{d^2}{dt^2}\mathrm{f}(0)\left(\frac{1}{6}\left(t_2^3-t_1^3\right)-\frac{1}{5}\left(\frac{\pi n}{T}\right)^2\left(t_2^5-t_1^5\right)+\frac{1}{21}\left(\frac{\pi n}{T}\right)^4\left(t_2^7-t_1^7\right)+\dots\right)\\
&+\frac{d^3}{dt^3}\mathrm{f}(0)\left(\frac{1}{24}\left(t_2^4-t_1^4\right)-\frac{1}{18}\left(\frac{\pi n}{T}\right)^2\left(t_2^6-t_1^6\right)+\frac{1}{72}\left(\frac{\pi n}{T}\right)^4\left(t_2^8-t_1^8\right)+\dots\right)+\dots
\end{align*}$$

Now it is easy to recognize the sequences in the brackets (after the expression is multiplied by $2/T$). The general term of the bracket multiplying $\mathrm{f}(0)$ is
$$\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{(2i+1)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+1}-t_1^{2i+1}\right)}{T^{2i+1}}$$
For $\frac{d}{dt}\mathrm{f}(0)$:
$$\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{(2i+2)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+2}-t_1^{2i+2}\right)}{T^{2i+1}}$$
For $\frac{d^2}{dt^2}\mathrm{f}(0)$:
$$\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{2\,(2i+3)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+3}-t_1^{2i+3}\right)}{T^{2i+1}}$$
For $\frac{d^3}{dt^3}\mathrm{f}(0)$:
$$\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{6\,(2i+4)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+4}-t_1^{2i+4}\right)}{T^{2i+1}}$$
and so on. Finally, the general term for $\frac{d^m}{dt^m}\mathrm{f}(0)$ is
$$\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{m!\,(1+m+2i)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+m+1}-t_1^{2i+m+1}\right)}{T^{2i+1}}$$
Now we can find the sum using a CAS:
$$\mathrm{Ct}(n,m)=\sum_{i=0}^{\infty}\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{m!\,(1+m+2i)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+m+1}-t_1^{2i+m+1}\right)}{T^{2i+1}}$$
and $\mathrm{Ct}(n,m)$ becomes a quite complex expression containing Lommel or hypergeometric functions. In particular, for $m=0$ it becomes
$$\mathrm{Ct}(n,0)=\frac{\sin\left(\frac{2\pi n t_2}{T}\right)-\sin\left(\frac{2\pi n t_1}{T}\right)}{\pi n}$$
and for $m=1$:
$$\mathrm{Ct}(n,1)=\frac{2\pi n\left(\sin\left(\frac{2\pi n t_2}{T}\right)t_2-\sin\left(\frac{2\pi n t_1}{T}\right)t_1\right)+T\left(\cos\left(\frac{2\pi n t_2}{T}\right)-\cos\left(\frac{2\pi n t_1}{T}\right)\right)}{2(\pi n)^2}$$

and so on.

So we can write an expression for $a_n$:
$$a_n=\sum_{m=0}^{\infty}\frac{d^m}{dt^m}\mathrm{f}(0)\cdot\mathrm{Ct}(n,m)$$
or
$$a_n=\sum_{m=0}^{\infty}\frac{1}{m!}\frac{d^m}{dt^m}\mathrm{f}(0)\left(\sum_{i=0}^{\infty}\frac{(-1)^i\,2^{2i+1}\,n^{2i}}{(1+m+2i)\,(2i)!}\cdot\frac{\pi^{2i}\left(t_2^{2i+m+1}-t_1^{2i+m+1}\right)}{T^{2i+1}}\right)$$
We can easily see that $\frac{1}{m!}$ is the Taylor coefficient; thus a relationship between the Fourier coefficients and the coefficients in the Taylor expansion is established via special functions. In the particular case of a full-wave function, $t_1=-T/2$ and $t_2=+T/2$, we can write a simpler closed form for $\mathrm{Ct}(n,m)$:
$$\frac{(-1)^n\,T^m\left((\pi n)^{-m-\frac{1}{2}}\left(\mathrm{I}\cdot\mathrm{L}_{\mathrm{S1}}\left(m+\tfrac{3}{2},\tfrac{1}{2},-\pi n\right)-\mathrm{L}_{\mathrm{S1}}\left(m+\tfrac{3}{2},\tfrac{1}{2},\pi n\right)\right)+1+(-1)^m\right)}{2^{m}\,(m+1)!}$$
where $\mathrm{L}_{\mathrm{S1}}(\mu,\nu,z)=\mathrm{s}_{\mu,\nu}(z)$ is the first Lommel function and $\mathrm{I}=(-1)^{2^{-1}}$ (the coefficient $a_n$ is complex).

For example, consider a parabolic signal with period $T$: $\mathrm{g}(t)=A t^2+B t+C$. The coefficients $a_n$ can be found using the Fourier formula:
$$a_n=\frac{2}{T}\int_{-T/2}^{T/2}\left(A t^2+B t+C\right)\cos\left(\frac{2\pi n t}{T}\right)\,dt=A\left(\frac{T}{\pi n}\right)^2(-1)^n$$
For the function $\mathrm{g}(t)$: $\mathrm{g}'(t)=2A t+B$, $\mathrm{g}''(t)=2A$, and derivatives of order greater than two are zero. So we can use $\mathrm{Ct}(n,m)$ for $m=0,1,2$. It is easy to check that if $t_1=-T/2$ and $t_2=+T/2$ then $\mathrm{Ct}(n,0)=0$ and $\mathrm{Ct}(n,1)=0$: the values are zero for odd $m$ and nonzero for even $m$; in particular, for $m=2$:
$$\mathrm{Ct}(n,2)=\frac{1}{2}\left(\frac{T}{\pi n}\right)^2(-1)^n$$
and, as a further example, for $m=4$: $\mathrm{Ct}(n,4)=\frac{1}{48}\left(\frac{T}{\pi n}\right)^4\left((\pi n)^2-6\right)(-1)^n$.

Finally we obtain, for $m$ up to $4$:
$$\begin{align*}
a_n=\sum_{m=0}^{4}\frac{d^m}{dt^m}\mathrm{g}(0)\cdot\mathrm{Ct}(n,m)&=\left(A\cdot 0^2+B\cdot 0+C\right)\cdot 0+\left(2A\cdot 0+B\right)\cdot 0+2A\cdot\frac{1}{2}\left(\frac{T}{\pi n}\right)^2(-1)^n+0+\dots\\
&=A\left(\frac{T}{\pi n}\right)^2(-1)^n
\end{align*}$$
the same result as with the Fourier integral. There is an interesting result for a non-integer harmonic of $\mathrm{g}(t)$:
$$a_n=\frac{1}{120}A T^2(-1)^{n+1}\left({}_{2}\mathrm{F}_1\left(1,3;\tfrac{7}{2};-\left(\tfrac{\pi n}{2}\right)^2\right)(\pi n)^2-20\right)$$
where ${}_{2}\mathrm{F}_1(a,b;c;z)$ is the hypergeometric function. So we can plot the coefficients calculated from the Fourier integral and from this special result (for $T=1$, $A=1$):
(Plot omitted: red circles and red line are the Fourier cosine coefficients (real part for non-integer powers of $-1$); the solid blue line is the real part of the expression with the hypergeometric function, and the dashed green line its imaginary part. For integer $n$ the imaginary part is zero and $a_n$ is real.)
Similar expressions can be obtained for the sine series $b_n$.
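The parabolic worked example can be verified numerically (a sketch with arbitrarily chosen constants; only the formula $a_n=A(T/\pi n)^2(-1)^n$ is from the answer):

```python
# Check of the worked example: for g(t) = A t^2 + B t + C on [-T/2, T/2],
# the Fourier cosine coefficients are a_n = A (T/(pi n))^2 (-1)^n,
# independent of B and C. A, B, C here are arbitrary test values.
import numpy as np

A, B, C, T = 1.3, 0.7, -2.0, 1.0
t = np.linspace(-T / 2, T / 2, 200001)
h = t[1] - t[0]
g = A * t**2 + B * t + C

for n in range(1, 4):
    y = g * np.cos(2 * np.pi * n * t / T)
    a_n = (2 / T) * (y.sum() - 0.5 * (y[0] + y[-1])) * h   # trapezoid rule
    print(n, a_n, A * (T / (np.pi * n))**2 * (-1)**n)      # columns agree
```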























    • 3




      I am afraid I can't see the forest for the trees. Could you please give the general idea what you are trying to achieve with all those transformations and how it answers the question. Thank you.
      – vonjd
      Aug 10 '16 at 6:14






    • 2




      As stated by Qiaochu, in fact, we have a set of numbers for the speed and acceleration, and a set of numbers for the sine and cosine coefficients (harmonics) for process description. In the first case, for example, you can achieve good approximation for free-fall parabolic movement, in the second - if this movement is due to vibrations, such as a pendulum or a spring. The idea is to find a function that directly converts vector of numerical values of speed, acceleration (and higher derivatives) to harmonics (sine and cosine coeffs. or phases/amplitudes): that is Lommel function as shown.
      – Timur Zhoraev
      Aug 10 '16 at 7:35


















Answer (5 votes)













    I think that the missing link that connects the Fourier transform to the Taylor series expansion is Euler's formula, $e^{\jmath x}=\cos(x)+\jmath\sin(x)$. This celebrated formula establishes a relationship between trigonometric functions of real arguments and exponential functions of imaginary arguments. In doing so, it establishes a fundamental connection with the Fourier transform, which is essentially trigonometric in nature. In fact, the Fourier transform of a function $f(t)$ can be seen as the limit of the Fourier series of $f(t)$ as the period $T\rightarrow\infty$, and a Fourier series is, by definition, a linear summation of sine and cosine functions. The Taylor series comes into play in the derivation of Euler's formula.



    As previously mentioned, Euler's formula states $e^{\jmath x}=\cos(x)+\jmath\sin(x)$. Hence, it's acceptable to conceptually superimpose the conventional $(x,y)$ unit circle and the complex plane, as they both portray the polar Eulerian expression on the continuous interval from $0$ to $2\pi$. Substituting $x=2\pi\theta$ into Euler's formula, we arrive at
    $$e^{2\pi\jmath\theta}=\cos(2\pi\theta)+\jmath\sin(2\pi\theta)$$
    This expression is intrinsically intertwined with the nature of a Fourier transform, because Fourier transforms aim to convert a function from the time domain to the frequency domain. To do so, we decompose periodic functions into simple, linear sums of sines and cosines, and let the period of this representation tend to infinity.



    So, where does the Taylor series fit into all this? We use the Taylor series expansion (or more precisely, the Maclaurin series expansion) to obtain Euler's formula. Here's the proof (or derivation, depending on your perspective):



    Proposition: For any complex number $z$, where $z=x+\jmath y=x+\sqrt{-1}\,y$, i.e. $x=\Re\{z\}$ and $y=\Im\{z\}$, it can be said that, for real values $x$,

    $$e^{\jmath x}=\cos(x)+\jmath\sin(x)$$

    Lemma 1: The Taylor series expansion of an infinitely differentiable function $f(x)$ in the neighborhood of $a$ is defined as
    $$f(x)=\sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}(x-a)^k$$
    We'll work in the neighborhood surrounding $\theta=0$, which gives a special case of the Taylor series, called the Maclaurin series. The Maclaurin expansions of $\sin(\theta)$ and $\cos(\theta)$ are
    $$\sin(\theta)=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\,\theta^{2k+1}=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\frac{\theta^7}{7!}\cdots$$
    $$\cos(\theta)=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}\,\theta^{2k}=1-\frac{\theta^2}{2!}+\frac{\theta^4}{4!}-\frac{\theta^6}{6!}\cdots$$
    This is not a surprise, as we know that sine and cosine are odd and even functions, respectively. We've now accounted for the trigonometric portions of $e^z$, but have not yet addressed the complex component.



    Lemma 2: We apply the same Maclaurin series expansion to the function $e^z$ when $z=\jmath x$, and we get
    $$e^{\jmath x}=\sum_{k=0}^{\infty}\frac{(\jmath x)^k}{k!}=1+\jmath x-\frac{x^2}{2!}-\jmath\frac{x^3}{3!}+\frac{x^4}{4!}+\jmath\frac{x^5}{5!}-\cdots$$

    Comparing the right-hand terms from the expansions of $\cos(\theta)$, $\sin(\theta)$, and $e^{\jmath\theta}$ performed in Lemmas 1 and 2, we see that the even-degree terms of $e^{\jmath x}$ reproduce $\cos(x)$ and the odd-degree terms reproduce $\jmath\sin(x)$, and so we arrive at
    $e^{\jmath x}=\cos(x)+\jmath\sin(x)$. [Q.E.D.]
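The proof can be checked numerically (a small sketch, my addition): partial Maclaurin sums of $e^{\jmath x}$ converge to $\cos(x)+\jmath\sin(x)$:

```python
# Sketch: a truncated Maclaurin series of e^{ix} agrees with
# cos(x) + i*sin(x), which is the content of the proof above.
import math

def exp_series(z, terms=30):
    # partial Maclaurin sum of e^z
    return sum(z**k / math.factorial(k) for k in range(terms))

x = 1.234
approx = exp_series(1j * x)
exact = complex(math.cos(x), math.sin(x))
print(abs(approx - exact))   # essentially zero (rounding level)
```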



    Now that we've proven Euler's formula, we can substitute $x=2\pi\theta$ to acquire

    $$e^{2\pi\jmath\theta}=\cos(2\pi\theta)+\jmath\sin(2\pi\theta)$$



    Due to the characteristics of the sin and cos functions, it is possible to use simple integration to recover the amplitude of each sin and cos wave represented in a Fourier transform (similar to the reverse of the above proof). In an overwhelming majority of cases, it's highly useful to select Euler's formula as the function to integrate over. Because Euler's formula and the Fourier transform are both (at least, in part) fundamentally trigonometric in nature, the use of Euler's formula greatly simplifies most of the real portions of the Fourier analyses. Additionally, for the complex case, frequencies can be represented inverse-temporally using a combination of Euler's formula and the Fourier series expansion.



    So, it's a bit messy and convoluted (etymologically, not integrally), but it really boils down to the fact that the Taylor (or Maclaurin) series, the Fourier series and transform, and Euler's formula are all related trigonometrically.
    The differences between the three arise by nature of application. Taylor series are used to represent functions as infinite sums built from their derivatives. Fourier series and transforms are used in linear systems and/or differential equations to convert signals or DEs from the time domain to the frequency domain. Euler's formula relates trigonometric and complex exponential (complexponential?!) functions, and is also a formula that, when evaluated at $x=\pi$, yields Euler's identity, $e^{\jmath\pi}+1=0$, an equation so austerely eloquent and aesthetically arousing that I'd be down to stare at it all day.




























      6 Answers
      6






      active

      oldest

      votes








      6 Answers
      6






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      115
      down vote



      accepted










      Assume that the Taylor expansion $f(x)=sum_{k=0}^infty a_k x^k$ is convergent for some $|x|>1$. Then $f$ can be extended in a natural way into the complex domain by writing $f(z)=sum_{k=0}^infty a_k z^k$ with $z$ complex and $|z|≤1$. So we may look at $f$ on the unit circle $|z|=1$. Consider $f$ as a function of the polar angle $phi$ there, i.e., look at the function $F(phi):=f(e^{iphi})$. This function $F$ is $2pi$-periodic, and its Fourier expansion is nothing else but $F(phi)=sum_{k=0}^infty a_k e^{ikphi}$ where the $a_k$ are the Taylor coefficients of the "real" function $xmapsto f(x)$ we started with.






      share|cite|improve this answer



















      • 9




        Great answer. But as Qiaochu says, "it's worth mentioning that the Fourier transform is much more general than this" — the Fourier expansion exists even for functions not got in this way (as the restriction to the unit circle of some function whose Taylor series has radius of convergence greater than 1).
        – ShreevatsaR
        Jul 2 '11 at 4:18












      • Why does $|z|$ has to be less than $1$?
        – Ooker
        Sep 25 '17 at 19:19










      • @Ooker: Depending on the particular $f$ we started with its extension is analytic in some disc of radius $rho>1$. For the purposes of this answer we only have to be sure that $f$ behaves nicely for $|z|=1$.
        – Christian Blatter
        Sep 26 '17 at 6:56















edited Mar 19 '11 at 8:49
answered Oct 20 '10 at 15:52
Christian Blatter










up vote 53 down vote
      A holomorphic function in an annulus containing the unit circle has a Laurent series about zero which generalizes the Taylor series of a holomorphic function in a neighborhood of zero. When restricted to the unit circle, this Laurent series gives a Fourier series of the corresponding periodic function. (This explains the connection between the Cauchy integral formula and the integral defining the coefficients of a Fourier series.)



      But it's worth mentioning that the Fourier transform is much more general than this and applies in a broad range of contexts. I don't know that there's a short, simple answer to this question.



Edit: I guess it's also worth talking about intuition. One intuition for the Taylor series of a function $f(x)$ at a point is that its coefficients describe the displacement, velocity, acceleration, jerk, and so forth of a particle which is at location $f(t)$ at time $t$. And one intuition for the Fourier series of a periodic function $f(x)$ is that it describes the decomposition of $f(x)$ into pure tones of various frequencies. In other words, a periodic function is like a chord, and its Fourier series describes the notes in that chord.



      (The connection between the two provided by the Cauchy integral formula is therefore quite remarkable; one takes an integral of $f$ over the unit circle and it tells you information about the behavior of $f$ at the origin. But this is more a magic property of holomorphic functions than anything else. One intuition to have here is that a holomorphic function describes, for example, the flow of some ideal fluid, and integrating over the circle gives you information about "sources" and "sinks" of that flow within the circle.)
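The "chord" intuition can be sketched numerically (assuming only NumPy; the particular tones and amplitudes are made up for illustration): build one period of a signal from two pure sine tones, then read the notes back off its Fourier coefficients.

```python
import numpy as np

# One period (T = 1) of a "chord": a tone of amplitude 3 at frequency 5
# plus a tone of amplitude 2 at frequency 12.
N = 1024
t = np.arange(N) / N
chord = 3 * np.sin(2 * np.pi * 5 * t) + 2 * np.sin(2 * np.pi * 12 * t)

# rfft gives the coefficients of the non-negative harmonics; scaling by
# 2/N converts them to the amplitude of each sine/cosine component.
amplitude = np.abs(np.fft.rfft(chord)) * 2 / N
notes = np.nonzero(amplitude > 1e-6)[0]

assert list(notes) == [5, 12]            # the two tones in the chord
assert np.allclose(amplitude[notes], [3, 2])
```

Because the frequencies are exact integer harmonics of the period, every other Fourier coefficient vanishes (up to floating-point noise), so the spectrum is exactly the list of notes.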






      • 3




        The "chord" analogy should not be taken at face value, by the way; any note you hear played on a physical instrument is not a pure sine wave but comes with a collection of overtones, and the relative strength of these overtones is what gives different instruments their different sounds. Actual pure sine waves - without overtones - can really only be generated electronically.
        – Qiaochu Yuan
        Oct 21 '10 at 17:05






      • 7




        displacement, velocity, acceleration, jerk, ... or for the non-physics intuition, $\mathrm{Taylor} \rightsquigarrow$ 0, mean, variance, skewness, kurtosis, ... in statistics. Or $\mathrm{Taylor} \rightsquigarrow$ height, tilt, curve, wiggle, womp, ... in terms of the graph $f(x)=x$.
        – isomorphismes
        Mar 18 '11 at 22:11

















edited Oct 20 '10 at 21:07
answered Oct 20 '10 at 15:43
Qiaochu Yuan












up vote 25 down vote
      There is a big difference between the Taylor series and Fourier transform. The Taylor series is a local approximation, while the Fourier transform uses information over a range of the variable.



      The theorem Qiaochu mentions is very important in complex analysis and is one indication of how restrictive having a derivative in the complex plane is on functions.
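The local-versus-global distinction can be sketched numerically (assuming only NumPy; the functions and the bump are made up for illustration): two functions that agree on a neighbourhood of $0$ have identical derivatives there, hence identical Taylor coefficients at $0$, yet a Fourier coefficient, being an integral over the whole period, tells them apart.

```python
import numpy as np

# f = cos(x) on [-pi, pi]; g adds a smooth bump supported near x = 2,
# far from the origin, so f and g coincide on a neighbourhood of 0.
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
f = np.cos(x)
bump = np.where(np.abs(x - 2) < 0.5, 1 + np.cos(2 * np.pi * (x - 2)), 0.0)
g = f + bump

# First cosine Fourier coefficient, a_1 = (1/pi) * integral f(x) cos(x) dx,
# approximated by a Riemann sum.
a1_f = np.sum(f * np.cos(x)) * dx / np.pi
a1_g = np.sum(g * np.cos(x)) * dx / np.pi

assert np.array_equal(f[np.abs(x) < 1], g[np.abs(x) < 1])  # same near 0
assert abs(a1_f - a1_g) > 0.1                              # Fourier differs
```

Since every Taylor coefficient of $g$ at $0$ equals the corresponding one of $f$, no Taylor data at the origin can distinguish them; the Fourier coefficient does so immediately.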

































edited Dec 2 '17 at 16:44 by nbro
answered Oct 20 '10 at 15:49
Ross Millikan






















up vote 24 down vote
There is an analogy, more direct for Fourier series. Both Fourier series and Taylor series are decompositions of a function $f(x)$, which is represented as a linear combination of a (countable) set of functions. The function is then fully specified by a sequence of coefficients, instead of by its values $f(x)$ for each $x$. In this sense, both can be called a transform
$f(x) \leftrightarrow \{ a_0, a_1, \dots\}$

For the Taylor series (around $0$, for simplicity), the set of functions is $\{1, x, x^2, x^3, \dots\}$. For the Fourier series it is $\{1, \sin(\omega x), \cos(\omega x), \sin(2\omega x), \cos(2\omega x), \dots\}$.

Actually the Fourier series is one of the many transformations that use an orthonormal basis of functions. It is shown that, in that case, the coefficients are obtained by "projecting" $f(x)$ onto each basis function, which amounts to an inner product, which (in the real scalar case) amounts to an integral. This implies that the coefficients depend on a global property of the function (over the full "period" of the function).

The Taylor series (which does not use an orthonormal basis) is conceptually very different, in that the coefficients depend only on local properties of the function, i.e., its behaviour in a neighbourhood (its derivatives).
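The "projection" view can be sketched numerically (assuming only NumPy; the normalisation and the test function are choices made for illustration): the trigonometric functions, suitably normalised, form an orthonormal family under the $L^2$ inner product, and projecting $f$ onto them recovers its coefficients.

```python
import numpy as np

N = 4096
x = 2 * np.pi * np.arange(N) / N
dx = 2 * np.pi / N

def inner(u, v):
    """Discrete approximation of the L^2 inner product on [0, 2*pi)."""
    return np.sum(u * v) * dx

# Normalised trigonometric family: 1/sqrt(2 pi), sin(kx)/sqrt(pi),
# cos(kx)/sqrt(pi) for k = 1, 2.
basis = [np.ones_like(x) / np.sqrt(2 * np.pi)]
for k in (1, 2):
    basis.append(np.sin(k * x) / np.sqrt(np.pi))
    basis.append(np.cos(k * x) / np.sqrt(np.pi))

# Orthonormality: the Gram matrix <b_i, b_j> is the identity.
gram = np.array([[inner(u, v) for v in basis] for u in basis])
assert np.allclose(gram, np.eye(len(basis)), atol=1e-10)

# Projecting f = 2 sin(x) + 0.5 cos(2x) recovers its coefficients
# (up to the normalisation factor sqrt(pi)).
f = 2 * np.sin(x) + 0.5 * np.cos(2 * x)
coeffs = np.array([inner(f, b) for b in basis])
assert np.allclose(coeffs, [0, 2 * np.sqrt(np.pi), 0,
                            0, 0.5 * np.sqrt(np.pi)], atol=1e-10)
```

Each coefficient is a single inner product, i.e. an integral over the whole period, which is exactly the global dependence described above.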






              • 1




                you meant $\{e^{ikx}, k \in \mathbb{Z}\}$, not the awful $\sin, \cos$ basis
                – reuns
                May 30 '16 at 3:26

















edited Feb 28 '17 at 18:27
answered Oct 21 '10 at 2:07
leonbloy












up vote 6 down vote
              Taylor series at $t=0$ of some function $textrm{f}(t)$ is defined as



              $$ textrm{f}left(tright) =sum_{j=0}^{infty}
              {
              h_jcdotpfrac{d^{j}}{dt^{j}}textrm{f}(0)cdotp t^{j}
              }
              $$
              where $ h_j=1/{j!}$ and $frac{d^0}{dt^0}textrm{f}left(tright)=textrm{f}left(tright)$
              Fourier series is defined as
              $$
              textrm{f}left(tright) =
              sum_{n=1}^{infty}
              { left(
              a_ncdotcos left({frac{2picdotp n cdotp t}{T}}right)+
              b_ncdotsin left({frac{2picdotp n cdotp t}{T}}right)
              right)
              }
              $$
              with coefficients:
              $$
              begin{align}
              a_n&=frac{2}{T}cdotpint_{t_1}^{t_2}{textrm{f}(t)cdotpcosleft({frac{2picdotp n cdotp t}{T}}right),dt}\
              b_n&=frac{2}{T}cdotpint_{t_1}^{t_2}{textrm{f}(t)cdotpsinleft({frac{2picdotp n cdotp t}{T}}right),dt}
              end{align}
              $$
              For full-wave function $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ for any positive period $T$.
              Let find Taylor series of cosine and sine functions:
              $$
              begin{align}
              cosleft({frac{2picdotp n cdotp t}{T}}right)&=sum_{k=0}^{infty}{frac{(-1)^k}{left(2cdotp k right)!}
              cdotp left({frac{2picdotp n cdotp t}{T}}right)^{2cdotp k}}\
              sinleft({frac{2picdotp n cdotp t}{T}}right)&=sum_{k=0}^{infty}{frac{(-1)^k}{left(2cdotp k + 1right)!}
              cdotp left({frac{2picdotp n cdotp t}{T}}right)^{left(2cdotp k +1right)}}
              end{align}
              $$
              and substitute this expansion to Fourier coefficients:
              $$
              begin{align}
              a_n&=frac{2}{T}cdotp
              int_{t_1}^{t_2}
              {
              underbrace
              {
              left(
              sum_{j=0}^{infty}
              {
              h_jcdotpfrac{d^{j}}{dt^{j}}textrm{f}(0)cdotp t^{j}
              }
              right)cdotp
              left(sum_{k=0}^{infty}
              {
              frac{(-1)^k}{left(2cdotp k right)!}
              cdotp left({frac{2picdotp n cdotp t}{T}}right)^{2cdotp k}
              }right)
              }_{mbox{$textrm{Tc}(t)$}}
              ,dt
              }\
              b_n&=frac{2}{T}cdotp
              int_{t_1}^{t_2}
              {
              underbrace
              {
              left(
              sum_{j=0}^{infty}
              {
              h_jcdotpfrac{d^{j}}{dt^{j}}textrm{f}(0)cdotp t^{j}
              }
              right)
              cdotp
              left(sum_{k=0}^{infty}
              {frac{(-1)^k}{left(2cdotp k + 1right)!}
              cdotp left({frac{2picdotp n cdotp t}{T}}right)^{left(2cdotp k +1right)}
              }
              right)
              }_{mbox{$textrm{Ts}(t)$}}
              ,dt
              }
              end{align}
              $$
              Now consider $textrm{Tc}(t)$:
              For some first indicies $j$ and $k$ brackets can be disclosured and terms multiplied in sequence as shown:
              $$
              begin{align*}
              textrm{Tc}(t)&=&textrm{f}(0)+frac{d}{dt}textrm{f}(0)cdotp t + left(-2cdotptextrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{2}+frac{1}{2}cdotpfrac{d^2}{dt^2}textrm{f}(0)right)cdotp t^{2}+\&&
              +left(-2cdotpfrac{d}{dt}textrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{2}+frac{1}{6}cdotpfrac{d^3}{dt^3}textrm{f}(0)right)cdotp t^{3}+\&&
              +left(frac{2}{3}cdotptextrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{4}-frac{d^2}{dt^2}textrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{2}+frac{1}{24}cdotpfrac{d^4}{dt^4}textrm{f}(0)right)cdotp t^{4}+dots
              end{align*}
              $$
Now integrate this expression term by term:
              $$
              begin{align*}
              int_{t_1}^{t_2}{textrm{Tc}(t),dt}&=&textrm{f}(0)left(t_2-t_1right)+frac{1}{2}cdotpfrac{d}{dt}textrm{f}(0)cdotpleft(t_2^2-t_1^2right) + \&& + frac{1}{3}cdotleft(-2cdotptextrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{2}+frac{1}{2}cdotpfrac{d^2}{dt^2}textrm{f}(0)right)cdotpleft(t_2^3-t_1^3right) +\&&
              +frac{1}{4}cdotpleft(-2cdotpfrac{d}{dt}textrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{2}+frac{1}{6}cdotpfrac{d^3}{dt^3}textrm{f}(0)right)cdotpleft(t_2^4-t_1^4right)+\&&
              +frac{1}{5}left(frac{2}{3}cdotptextrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{4}-frac{d^2}{dt^2}textrm{f}(0)cdotpleft(frac{picdotp n}{T}right)^{2}+frac{1}{24}cdotpfrac{d^4}{dt^4}textrm{f}(0)right)cdotpleft(t_2^5-t_1^5right)+dots
              end{align*}
              $$
Collect the coefficients of $frac{d^i}{dt^i}textrm{f}(0)$ in the integral above:
              $$
              begin{align*}
int_{t_1}^{t_2}{textrm{Tc}(t),dt}&=&textrm{f}(0)cdotpleft( left(t_2-t_1right)-
frac{2}{3}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^3-t_1^3right)+frac{2}{15}cdotpleft(frac{picdotp n}{T}right)^4cdotpleft(t_2^5-t_1^5right)+dotsright) +
              \&&
+frac{d}{dt}textrm{f}(0)cdotpleft(frac{1}{2}cdotp left(t_2^2-t_1^2right)-
              frac{1}{2}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^4-t_1^4right)+frac{1}{9}cdotpleft(frac{picdotp n}{T}right)^4cdotpleft(t_2^6-t_1^6right)+dotsright)+
              \&&
+frac{d^2}{dt^2}textrm{f}(0)cdotpleft(frac{1}{6}cdotp left(t_2^3-t_1^3right)-
              frac{1}{5}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^5-t_1^5right)+frac{1}{21}cdotpleft(frac{picdotp n}{T}right)^4cdotpleft(t_2^7-t_1^7right)+dotsright)+
              \&&
+frac{d^3}{dt^3}textrm{f}(0)cdotpleft(frac{1}{24}cdotp left(t_2^4-t_1^4right)-
              frac{1}{18}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^6-t_1^6right)+frac{1}{72}cdotpleft(frac{picdotp n}{T}right)^4cdotpleft(t_2^8-t_1^8right)+dotsright)+dots
              end{align*}
              $$



Now it is easy to recognize the sequences in the brackets; on the right of each line below is the general term of the sequence, already multiplied by $2/T$.
              For $textrm{f}(0)$:
              $$
              begin{align*}
              left(t_2-t_1right)-
              frac{2}{3}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^3-t_1^3right)+dots &:& frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ 1cdotleft(2cdot i+1 right)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+1 right) -t_1^ left(2cdot i+1 right)right) }{T^ left(2cdot i+1 right)}
              end{align*}
              $$
              For $frac{d}{dt}textrm{f}(0)$:
              $$
              begin{align*}
frac{1}{2}cdotp left(t_2^2-t_1^2right)-frac{1}{2}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^4-t_1^4right)+dots &:&frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ 1cdot left(2cdot i+2 right)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+2 right) -t_1^ left(2cdot i+2 right)right) }{T^ left(2cdot i+1 right)}
              end{align*}
              $$
              For $frac{d^2}{dt^2}textrm{f}(0)$:
              $$
              begin{align*}
frac{1}{6}cdotp left(t_2^3-t_1^3right)-frac{1}{5}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^5-t_1^5right)+dots &:&frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ 1cdot2cdot left(2cdot i+3 right)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+3 right) -t_1^ left(2cdot i+3 right)right) }{T^ left(2cdot i+1 right)}
              end{align*}
              $$
              For $frac{d^3}{dt^3}textrm{f}(0)$:
              $$
              begin{align*}
frac{1}{24}cdotp left(t_2^4-t_1^4right)-
              frac{1}{18}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^6-t_1^6right)+dots&:&frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ 1cdot 2cdot 3cdotleft(2cdot i+4 right)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+4 right) -t_1^ left(2cdot i+4 right)right) }{T^ left(2cdot i+1 right)}
              end{align*}
              $$
              and so on.
Finally, the general term of the sequence for $frac{d^m}{dt^m}textrm{f}(0)$ is:
              $$
              frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ m!cdotleft(1+m+2cdot iright)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+m+1 right) -t_1^ left(2cdot i+m+1 right)right) }{T^ left(2cdot i+1 right)}
              $$
Now we can find the sum using a CAS:
              $$
              textrm{Ct}(n,m)=sum_{i=0}^{infty}{left(frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ m!cdotleft(1+m+2cdot iright)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+m+1 right) -t_1^ left(2cdot i+m+1 right)right) }{T^ left(2cdot i+1 right)}right)}
              $$
and $textrm{Ct}(n,m)$ becomes a rather complex expression involving Lommel or hypergeometric functions.
In particular, for $m=0$ the function becomes
              $$
textrm{Ct}(n,0)=frac{sinleft(frac{2pi ncdot t_2}{T}right)-sinleft(frac{2pi n cdot t_1}{T}right)}{pi n}
              $$
              for $m=1$:
              $$
              textrm{Ct}(n,1)= frac
              {2pi n
              left(
sinleft(frac{2pi ncdot t_2}{T}right)cdot t_2-sinleft(frac{2pi ncdot t_1}{T}right)cdot t_1
              right)
              +Tcdot
              left(
cosleft(frac{2pi ncdot t_2}{T}right)-cosleft(frac{2pi ncdot t_1}{T}right)
              right)
              }
              { 2cdot(pi n)^2 }
              $$
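These closed forms are easy to sanity-check numerically. The sketch below is not part of the original derivation: `simpson`, `Ct_numeric`, and the interval endpoints are illustrative choices. It compares $textrm{Ct}(n,0)$ and $textrm{Ct}(n,1)$ against direct quadrature of $frac{2}{T}int_{t_1}^{t_2}{frac{t^m}{m!}cosleft(frac{2pi n t}{T}right),dt}$:

```python
import math

def simpson(f, a, b, n_panels=2000):
    # Composite Simpson rule; n_panels must be even.
    h = (b - a) / n_panels
    s = f(a) + f(b)
    for i in range(1, n_panels):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def Ct_numeric(n, m, t1, t2, T):
    # Ct(n, m) = (2/T) * integral_{t1}^{t2} t^m/m! * cos(2*pi*n*t/T) dt
    f = lambda t: t**m / math.factorial(m) * math.cos(2 * math.pi * n * t / T)
    return (2.0 / T) * simpson(f, t1, t2)

def Ct0(n, t1, t2, T):
    # Closed form for m = 0
    a = 2 * math.pi * n / T
    return (math.sin(a * t2) - math.sin(a * t1)) / (math.pi * n)

def Ct1(n, t1, t2, T):
    # Closed form for m = 1
    a = 2 * math.pi * n / T
    num = (2 * math.pi * n * (t2 * math.sin(a * t2) - t1 * math.sin(a * t1))
           + T * (math.cos(a * t2) - math.cos(a * t1)))
    return num / (2 * (math.pi * n) ** 2)

T, t1, t2 = 1.0, -0.3, 0.45  # arbitrary, deliberately non-symmetric limits
for n in (1, 2, 3):
    assert abs(Ct_numeric(n, 0, t1, t2, T) - Ct0(n, t1, t2, T)) < 1e-9
    assert abs(Ct_numeric(n, 1, t1, t2, T) - Ct1(n, t1, t2, T)) < 1e-9
```

Composite Simpson converges like $h^4$ on a smooth integrand, so a few thousand panels are ample here.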



              and so on.



So we can write the expression for $a_n$:
              $$
              a_n=sum_{m=0}^{infty}{frac{d^m}{dt^m}textrm{f}(0)cdottextrm{Ct}(n,m)}
              $$
              or
              $$
              a_n=sum_{m=0}^{infty}{frac{1}{m!}cdotfrac{d^m}{dt^m}textrm{f}(0)cdotleft(sum_{i=0}^{infty}{left(frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ left(1+m+2cdot iright)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+m+1 right) -t_1^ left(2cdot i+m+1 right)right) }{T^ left(2cdot i+1 right)}right)}right)}
              $$
It is easy to see that $frac{1}{m!}$ is the Taylor coefficient; thus the relationship between the Fourier coefficients and the coefficients of the Taylor expansion is established through special functions.
In the particular case of the full-wave function, $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$, we can write a simpler closed form for $textrm{Ct}(n,m)$:
              $$
              frac{
              left(-1 right)^n T^m
              left(
              left(pi nright)^ left(-m-frac{1}{2} right)left(textrm{I}cdottextrm{L}_{textrm{S1}}left(m+frac{3}{2}, frac{1}{2}, -pi n right)-textrm{L}_{textrm{S1}}left(m+frac{3}{2}, frac{1}{2}, pi n right)right)+1+left(-1 right)^m right)
              }
              {2^ {m}cdotleft(m+1 right)!}
              $$
where $textrm{L}_{textrm{S1}}(mu,nu,z)=textrm{s}_{mu,nu}(z)$ is the first Lommel function and $textrm{I}=(-1)^{1/2}$ is the imaginary unit (the $a_n$ coefficient is formally complex).
For example, consider a parabolic signal with period $T$: $textrm{g}(t)=Acdot t^2+Bcdot t + C$.
The coefficients $a_n$ can be found using the Fourier formula:
              $$
              a_n=frac{2}{T}cdotpint_{-T/2}^{T/2}{left(Acdot t^2+Bcdot t + Cright)cdotpcosleft({frac{2picdotp n cdotp t}{T}}right),dt} = Acdotleft(frac{T}{pi n}right)^2cdot(-1)^n
              $$
For the $textrm{g}(t)$ function: $textrm{g}'(t)=2Acdot t + B$, $textrm{g}''(t)=2A$, and derivatives of order greater than two are zero. So we only need $textrm{Ct}(n,m)$ for $m=0,1,2$. It is easy to check that for $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ we get $textrm{Ct}(n,0)=0$ and $textrm{Ct}(n,1)=0$; in fact $textrm{Ct}(n,m)$ vanishes for all odd $m$ (odd integrand over a symmetric interval), while for even $mge 2$ it is nonzero. In particular, for $m=2$:



              $$
              textrm{Ct}(n,2) = frac{1}{2}left(frac{T}{pi n}right)^2cdot(-1)^n
              $$
for $m=4$, for example, $textrm{Ct}(n,4) = frac{1}{48}left(frac{T}{pi n}right)^4cdotleft((pi n)^2-6right)cdot(-1)^n$.
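These full-wave values can be checked the same way. The quadrature below is an illustrative sketch (not the author's code; `simpson` and `Ct_full_wave` are names chosen here) confirming $textrm{Ct}(n,2)$, $textrm{Ct}(n,4)$, and the vanishing of $textrm{Ct}(n,0)$ and the odd orders on the symmetric interval:

```python
import math

def simpson(f, a, b, n_panels=4000):
    # Composite Simpson rule; n_panels must be even.
    h = (b - a) / n_panels
    s = f(a) + f(b)
    for i in range(1, n_panels):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def Ct_full_wave(n, m, T):
    # Ct(n, m) on the symmetric interval t1 = -T/2, t2 = +T/2
    f = lambda t: t**m / math.factorial(m) * math.cos(2 * math.pi * n * t / T)
    return (2.0 / T) * simpson(f, -T / 2, T / 2)

T = 1.0
for n in (1, 2, 5):
    ct2 = 0.5 * (T / (math.pi * n)) ** 2 * (-1) ** n
    ct4 = (T / (math.pi * n)) ** 4 * ((math.pi * n) ** 2 - 6) / 48 * (-1) ** n
    assert abs(Ct_full_wave(n, 2, T) - ct2) < 1e-9
    assert abs(Ct_full_wave(n, 4, T) - ct4) < 1e-9
    assert abs(Ct_full_wave(n, 0, T)) < 1e-9  # sin(pi*n) = 0 for integer n
    assert abs(Ct_full_wave(n, 1, T)) < 1e-9  # odd orders vanish by symmetry
    assert abs(Ct_full_wave(n, 3, T)) < 1e-9
```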
Finally, taking $m$ up to 4, we obtain:



              $$
              begin{align*}
              a_n=sum_{m=0}^{4}{frac{d^m}{dt^m}textrm{g}(0)cdottextrm{Ct}(n,m)}=&\
              =left(Acdot 0^2+Bcdot 0 + Cright)cdot 0+left(2Acdot 0+Bright)cdot 0+frac{1}{2}left(frac{T}{pi n}right)^2cdot(-1)^n cdot 2A+0cdot0+...cdot 0=\
              =Acdotleft(frac{T}{pi n}right)^2cdot(-1)^n
              end{align*}
              $$
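The parabola example can likewise be verified numerically: the direct Fourier integral, the closed form $Acdotleft(frac{T}{pi n}right)^2cdot(-1)^n$, and the single surviving term $textrm{g}''(0)cdottextrm{Ct}(n,2)=2Acdottextrm{Ct}(n,2)$ should all agree. A minimal sketch, with illustrative constants $A$, $B$, $C$:

```python
import math

def simpson(f, a, b, n_panels=4000):
    # Composite Simpson rule; n_panels must be even.
    h = (b - a) / n_panels
    s = f(a) + f(b)
    for i in range(1, n_panels):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Illustrative constants for the parabolic signal g(t) = A*t^2 + B*t + C
A, B, C, T = 2.0, -1.5, 0.7, 1.0
g = lambda t: A * t**2 + B * t + C

def a_n_direct(n):
    # a_n from the Fourier integral over one full period
    f = lambda t: g(t) * math.cos(2 * math.pi * n * t / T)
    return (2.0 / T) * simpson(f, -T / 2, T / 2)

for n in (1, 2, 3):
    closed = A * (T / (math.pi * n)) ** 2 * (-1) ** n
    ct2 = 0.5 * (T / (math.pi * n)) ** 2 * (-1) ** n  # Ct(n, 2), full wave
    assert abs(a_n_direct(n) - closed) < 1e-9   # integral matches closed form
    assert abs(2 * A * ct2 - closed) < 1e-12    # 2A * Ct(n,2) matches too
```

Note that $B$ and $C$ drop out, exactly as $textrm{Ct}(n,0)=textrm{Ct}(n,1)=0$ predicts.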
which is the same result as from the Fourier integral. An interesting result for non-integer harmonics $n$ of $textrm{g}(t)$ is:
              $$
a_n=frac{1}{120}Acdot T^2 cdot (-1)^{n+1} left({}_{2}textrm{F}_1left( 1, 3; frac{7}{2}; -left(frac{pi n}{2}right)^2right)cdot(pi n)^2-20right)
              $$
where ${}_{2}textrm{F}_1left(a,b;c;zright)$ is the hypergeometric function.
So we can plot the coefficients calculated from the Fourier integral together with this special result (for $T=1,,A=1$):
[Plot: cosine series. Red circles and red line: Fourier cosine coefficients (real part of the non-integer power of $-1$); solid blue line: real part of the expression with the hypergeometric function; dashed green line: imaginary part. For integer $n$ the imaginary part is zero and $a_n$ is real.]
              Similar expressions can be obtained for the sine $b_n$ series.






              • 3




                I am afraid I can't see the forest for the trees. Could you please give the general idea what you are trying to achieve with all those transformations and how it answers the question. Thank you.
                – vonjd
                Aug 10 '16 at 6:14






              • 2




                As stated by Qiaochu, in fact, we have a set of numbers for the speed and acceleration, and a set of numbers for the sine and cosine coefficients (harmonics) for process description. In the first case, for example, you can achieve good approximation for free-fall parabolic movement, in the second - if this movement is due to vibrations, such as a pendulum or a spring. The idea is to find a function that directly converts vector of numerical values of speed, acceleration (and higher derivatives) to harmonics (sine and cosine coeffs. or phases/amplitudes): that is Lommel function as shown.
                – Timur Zhoraev
                Aug 10 '16 at 7:35















              end{align*}
              $$
              For $frac{d^2}{dt^2}textrm{f}(0)$:
              $$
              begin{align*}
              frac{1}{6}cdotpfrac{picdotp n}{T}cdotp left(t_2^3-t_1^3right)-frac{1}{5}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^5-t_1^5right)+dots &:&frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ 1cdot2cdot left(2cdot i+3 right)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+3 right) -t_1^ left(2cdot i+3 right)right) }{T^ left(2cdot i+1 right)}
              end{align*}
              $$
              For $frac{d^3}{dt^3}textrm{f}(0)$:
              $$
              begin{align*}
              frac{1}{24}cdotpfrac{picdotp n}{T}cdotp left(t_2^4-t_1^4right)-
              frac{1}{18}cdotpleft(frac{picdotp n}{T}right)^2cdotpleft(t_2^6-t_1^6right)+dots&:&frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ 1cdot 2cdot 3cdotleft(2cdot i+4 right)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+4 right) -t_1^ left(2cdot i+4 right)right) }{T^ left(2cdot i+1 right)}
              end{align*}
              $$
              and so on.
              Finally overall sequence for $frac{d^m}{dt^m}$ is computed as:
              $$
              frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ m!cdotleft(1+m+2cdot iright)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+m+1 right) -t_1^ left(2cdot i+m+1 right)right) }{T^ left(2cdot i+1 right)}
              $$
              Now we can find sum using CAS:
              $$
              textrm{Ct}(n,m)=sum_{i=0}^{infty}{left(frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ m!cdotleft(1+m+2cdot iright)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+m+1 right) -t_1^ left(2cdot i+m+1 right)right) }{T^ left(2cdot i+1 right)}right)}
              $$
              and $textrm{Ct}(n,m)$ becomes quite complex expression containing Lommel's or hypergeometric functions.
              In particular, when $m=0$ function becomes
              $$
              textrm{Ct}(n,0)=frac{sinleft(frac{2pi ncdot t_2}{T})right)-sinleft(frac{2pi n cdot t_1}{T}right)}{pi n}
              $$
              for $m=1$:
              $$
              textrm{Ct}(n,1)= frac
              {2pi n
              left(
              sinleft(frac{2pi ncdot t_2}{T})right)cdot t_2-sinleft(frac{2pi ncdot t_1}{T}right)cdot t_1
              right)
              +Tcdot
              left(
              cosleft(frac{2pi ncdot t_2}{T})right)-cosleft(frac{2pi ncdot t_1}{T}right)
              right)
              }
              { 2cdot(pi n)^2 }
              $$



              and so on.



              So we can write expression for $a_n$:
              $$
              a_n=sum_{m=0}^{infty}{frac{d^m}{dt^m}textrm{f}(0)cdottextrm{Ct}(n,m)}
              $$
              or
              $$
              a_n=sum_{m=0}^{infty}{frac{1}{m!}cdotfrac{d^m}{dt^m}textrm{f}(0)cdotleft(sum_{i=0}^{infty}{left(frac{left(-1 right)^icdot 2^ left(2cdot i+1 right)cdot n^ left(2cdot i right)}{ left(1+m+2cdot iright)cdot left(2cdot i right)! }cdot frac {pi^ left(2cdot i right)cdot left(t_2^ left(2cdot i+m+1 right) -t_1^ left(2cdot i+m+1 right)right) }{T^ left(2cdot i+1 right)}right)}right)}
              $$
              We can easy see that $frac{1}{m!}$ is Taylor's coefficient, thus relationship between the Fourier coefficients and the coefficient in the Taylor expansion using special functions is established.
              In the particular case of full wave function $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ we can write for $textrm{Ct}(n,m)$ more simple closed form:
              $$
              frac{
              left(-1 right)^n T^m
              left(
              left(pi nright)^ left(-m-frac{1}{2} right)left(textrm{I}cdottextrm{L}_{textrm{S1}}left(m+frac{3}{2}, frac{1}{2}, -pi n right)-textrm{L}_{textrm{S1}}left(m+frac{3}{2}, frac{1}{2}, pi n right)right)+1+left(-1 right)^m right)
              }
              {2^ {m}cdotleft(m+1 right)!}
              $$
              where $textrm{L}_{textrm{S1}}(mu,nu,z)=textrm{s}_{mu,nu}(z)$ is first Lommel function and $textrm{I}=(-1)^{2^{-1}}$ (Complex $a_n$ coefficient).
              For example let consider parabolic signal with period $T$: $textrm{g}(t)=Acdot t^2+Bcdot t + C$.
              Coefficients $a_n$ can be found using Fourier formula:
              $$
              a_n=frac{2}{T}cdotpint_{-T/2}^{T/2}{left(Acdot t^2+Bcdot t + Cright)cdotpcosleft({frac{2picdotp n cdotp t}{T}}right),dt} = Acdotleft(frac{T}{pi n}right)^2cdot(-1)^n
              $$
              For $textrm{g}(t)$ function: $textrm{g}'(t)=2Acdot t + B$ , $textrm{g}''(t)=2A$, derivatives greater than two is zero. So we can use $textrm{Ct}(n,m)$ for $m=0,1,2$. It is easy to check if $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ then $textrm{Ct}(n,0) = 0,textrm{Ct}(n,1)=0$ we have zero values for odd and non zero for even values, in particular, whenfor $m=2$:



              $$
              textrm{Ct}(n,2) = frac{1}{2}left(frac{T}{pi n}right)^2cdot(-1)^n
              $$
              for $m=4$ as example $textrm{Ct}(n,4) = frac{1}{48}left(frac{T}{pi n}right)^4cdotleft((pi n)^2-6right)cdot(-1)^n
              $.
              Finally we can obtain for example for $m$ up to 4:



              $$
              begin{align*}
              a_n=sum_{m=0}^{4}{frac{d^m}{dt^m}textrm{g}(0)cdottextrm{Ct}(n,m)}=&\
              =left(Acdot 0^2+Bcdot 0 + Cright)cdot 0+left(2Acdot 0+Bright)cdot 0+frac{1}{2}left(frac{T}{pi n}right)^2cdot(-1)^n cdot 2A+0cdot0+...cdot 0=\
              =Acdotleft(frac{T}{pi n}right)^2cdot(-1)^n
              end{align*}
              $$
              the same result using Fourier integral. There is interesting result for non-integer harmonic for $textrm{g}(t)$ is:
              $$
              a_n=frac{1}{120}Acdot T^2 cdot (-1)^{n+1} left({}_{2}textrm{
              F}_1left( 1, 3; frac{7}{2}; -left(frac{pi n}{2}right)^2right)cdot(pi n)^2-20right)
              $$
              where ${}_{2}textrm{
              F}_1left(a,b;c;zright)$ is hypergeometric function.
              So we can plot coefficients calculated from Fourier integral and for this special result (for $T=1,,A=1$):
              Cosine series. Red circles and red line is Fourier cosine coefficients (for real part of non-integer power of -1), solid blue line is real part of obtained expression with hypergeometric function and dash green line is imaginary. For integer $n$ imaginary part is zero and $a_n$ is real.
              Similar expressions can be obtained for the sine $b_n$ series.






              share|cite|improve this answer












The Taylor series of a function $\textrm{f}(t)$ at $t=0$ is defined as

$$ \textrm{f}\left(t\right) =\sum_{j=0}^{\infty}
{
h_j\cdotp\frac{d^{j}}{dt^{j}}\textrm{f}(0)\cdotp t^{j}
}
$$
where $h_j=1/{j!}$ and $\frac{d^0}{dt^0}\textrm{f}\left(t\right)=\textrm{f}\left(t\right)$.
The Fourier series is defined as
$$
\textrm{f}\left(t\right) =
\sum_{n=1}^{\infty}
{ \left(
a_n\cdot\cos \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)+
b_n\cdot\sin \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)
\right)
}
$$
with coefficients:
$$
\begin{align}
a_n&=\frac{2}{T}\cdotp\int_{t_1}^{t_2}{\textrm{f}(t)\cdotp\cos\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)\,dt}\\
b_n&=\frac{2}{T}\cdotp\int_{t_1}^{t_2}{\textrm{f}(t)\cdotp\sin\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)\,dt}
\end{align}
$$
For a full-wave function $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$, for any positive period $T$.
Let us find the Taylor series of the cosine and sine functions:
$$
\begin{align}
\cos\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)&=\sum_{k=0}^{\infty}{\frac{(-1)^k}{\left(2\cdotp k \right)!}
\cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{2\cdotp k}}\\
\sin\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)&=\sum_{k=0}^{\infty}{\frac{(-1)^k}{\left(2\cdotp k + 1\right)!}
\cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{2\cdotp k +1}}
\end{align}
$$
and substitute these expansions into the Fourier coefficients:
$$
\begin{align}
a_n&=\frac{2}{T}\cdotp
\int_{t_1}^{t_2}
\underbrace
{
\left(
\sum_{j=0}^{\infty}
{
h_j\cdotp\frac{d^{j}}{dt^{j}}\textrm{f}(0)\cdotp t^{j}
}
\right)\cdotp
\left(\sum_{k=0}^{\infty}
{
\frac{(-1)^k}{\left(2\cdotp k \right)!}
\cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{2\cdotp k}
}\right)
}_{\textrm{Tc}(t)}
\,dt
\\
b_n&=\frac{2}{T}\cdotp
\int_{t_1}^{t_2}
\underbrace
{
\left(
\sum_{j=0}^{\infty}
{
h_j\cdotp\frac{d^{j}}{dt^{j}}\textrm{f}(0)\cdotp t^{j}
}
\right)
\cdotp
\left(\sum_{k=0}^{\infty}
{\frac{(-1)^k}{\left(2\cdotp k + 1\right)!}
\cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{2\cdotp k +1}
}
\right)
}_{\textrm{Ts}(t)}
\,dt
\end{align}
$$
Now consider $\textrm{Tc}(t)$:
For the first few indices $j$ and $k$ the brackets can be expanded and the terms multiplied out in sequence as shown:
$$
\begin{align*}
\textrm{Tc}(t)=&\ \textrm{f}(0)+\frac{d}{dt}\textrm{f}(0)\cdotp t + \left(-2\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{2}\cdotp\frac{d^2}{dt^2}\textrm{f}(0)\right)\cdotp t^{2}+\\
&+\left(-2\cdotp\frac{d}{dt}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{6}\cdotp\frac{d^3}{dt^3}\textrm{f}(0)\right)\cdotp t^{3}+\\
&+\left(\frac{2}{3}\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{4}-\frac{d^2}{dt^2}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{24}\cdotp\frac{d^4}{dt^4}\textrm{f}(0)\right)\cdotp t^{4}+\dots
\end{align*}
$$
Now integrate this function term by term:
$$
\begin{align*}
\int_{t_1}^{t_2}{\textrm{Tc}(t)\,dt}=&\ \textrm{f}(0)\left(t_2-t_1\right)+\frac{1}{2}\cdotp\frac{d}{dt}\textrm{f}(0)\cdotp\left(t_2^2-t_1^2\right) + \\
&+ \frac{1}{3}\cdot\left(-2\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{2}\cdotp\frac{d^2}{dt^2}\textrm{f}(0)\right)\cdotp\left(t_2^3-t_1^3\right) +\\
&+\frac{1}{4}\cdotp\left(-2\cdotp\frac{d}{dt}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{6}\cdotp\frac{d^3}{dt^3}\textrm{f}(0)\right)\cdotp\left(t_2^4-t_1^4\right)+\\
&+\frac{1}{5}\left(\frac{2}{3}\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{4}-\frac{d^2}{dt^2}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{24}\cdotp\frac{d^4}{dt^4}\textrm{f}(0)\right)\cdotp\left(t_2^5-t_1^5\right)+\dots
\end{align*}
$$
Collect the coefficients of $\frac{d^m}{dt^m}\textrm{f}(0)$ in the integral above:
$$
\begin{align*}
\int_{t_1}^{t_2}{\textrm{Tc}(t)\,dt}=&\ \textrm{f}(0)\cdotp\left( \left(t_2-t_1\right)-
\frac{2}{3}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^3-t_1^3\right)+\frac{2}{15}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^5-t_1^5\right)+\dots\right) +
\\
&+\frac{d}{dt}\textrm{f}(0)\cdotp\left(\frac{1}{2}\cdotp \left(t_2^2-t_1^2\right)-
\frac{1}{2}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^4-t_1^4\right)+\frac{1}{9}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^6-t_1^6\right)+\dots\right)+
\\
&+\frac{d^2}{dt^2}\textrm{f}(0)\cdotp\left(\frac{1}{6}\cdotp \left(t_2^3-t_1^3\right)-
\frac{1}{5}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^5-t_1^5\right)+\frac{1}{21}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^7-t_1^7\right)+\dots\right)+
\\
&+\frac{d^3}{dt^3}\textrm{f}(0)\cdotp\left(\frac{1}{24}\cdotp \left(t_2^4-t_1^4\right)-
\frac{1}{18}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^6-t_1^6\right)+\frac{1}{72}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^8-t_1^8\right)+\dots\right)+\dots
\end{align*}
$$



Now it is easy to recognize the sequences in the brackets (each right-hand expression below already includes the overall $2/T$ factor).
For $\textrm{f}(0)$:
$$
\left(t_2-t_1\right)-
\frac{2}{3}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^3-t_1^3\right)+\dots \;:\; \frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ 1\cdot\left(2\cdot i+1 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+1} -t_1^{2\cdot i+1}\right) }{T^{2\cdot i+1}}
$$
For $\frac{d}{dt}\textrm{f}(0)$:
$$
\frac{1}{2}\cdotp \left(t_2^2-t_1^2\right)-\frac{1}{2}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^4-t_1^4\right)+\dots \;:\; \frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ 1\cdot \left(2\cdot i+2 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+2} -t_1^{2\cdot i+2}\right) }{T^{2\cdot i+1}}
$$
For $\frac{d^2}{dt^2}\textrm{f}(0)$:
$$
\frac{1}{6}\cdotp \left(t_2^3-t_1^3\right)-\frac{1}{5}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^5-t_1^5\right)+\dots \;:\; \frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ 1\cdot2\cdot \left(2\cdot i+3 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+3} -t_1^{2\cdot i+3}\right) }{T^{2\cdot i+1}}
$$
For $\frac{d^3}{dt^3}\textrm{f}(0)$:
$$
\frac{1}{24}\cdotp \left(t_2^4-t_1^4\right)-
\frac{1}{18}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^6-t_1^6\right)+\dots \;:\; \frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ 1\cdot 2\cdot 3\cdot\left(2\cdot i+4 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+4} -t_1^{2\cdot i+4}\right) }{T^{2\cdot i+1}}
$$
and so on.
Finally, the general term for $\frac{d^m}{dt^m}\textrm{f}(0)$ is
$$
\frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ m!\cdot\left(1+m+2\cdot i\right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+m+1} -t_1^{2\cdot i+m+1}\right) }{T^{2\cdot i+1}}
$$
Now we can find the sum using a CAS:
$$
\textrm{Ct}(n,m)=\sum_{i=0}^{\infty}{\frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ m!\cdot\left(1+m+2\cdot i\right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+m+1} -t_1^{2\cdot i+m+1}\right) }{T^{2\cdot i+1}}}
$$
and $\textrm{Ct}(n,m)$ becomes a rather complex expression containing Lommel or hypergeometric functions.
In particular, for $m=0$ the function becomes
$$
\textrm{Ct}(n,0)=\frac{\sin\left(\frac{2\pi n\cdot t_2}{T}\right)-\sin\left(\frac{2\pi n \cdot t_1}{T}\right)}{\pi n}
$$
and for $m=1$:
$$
\textrm{Ct}(n,1)= \frac
{2\pi n
\left(
\sin\left(\frac{2\pi n\cdot t_2}{T}\right)\cdot t_2-\sin\left(\frac{2\pi n\cdot t_1}{T}\right)\cdot t_1
\right)
+T\cdot
\left(
\cos\left(\frac{2\pi n\cdot t_2}{T}\right)-\cos\left(\frac{2\pi n\cdot t_1}{T}\right)
\right)
}
{ 2\cdot(\pi n)^2 }
$$
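These two closed forms can be checked numerically against the series definition of $\textrm{Ct}(n,m)$. The sketch below is not part of the original derivation; `Ct_series` is a hypothetical helper name that sums the first few terms of the series directly:

```python
import math

def Ct_series(n, m, t1, t2, T, terms=40):
    """Partial sum of the series defining Ct(n, m) (hypothetical helper)."""
    s = 0.0
    for i in range(terms):
        num = (-1)**i * 2**(2*i + 1) * n**(2*i) * math.pi**(2*i)
        den = math.factorial(m) * (1 + m + 2*i) * math.factorial(2*i)
        s += num / den * (t2**(2*i + m + 1) - t1**(2*i + m + 1)) / T**(2*i + 1)
    return s

# Compare against the closed forms for m = 0 and m = 1 on an
# arbitrary (non-symmetric) interval.
T, n, t1, t2 = 1.0, 2, -0.1, 0.3
w = 2 * math.pi * n / T
ct0 = (math.sin(w*t2) - math.sin(w*t1)) / (math.pi * n)
ct1 = (2*math.pi*n*(math.sin(w*t2)*t2 - math.sin(w*t1)*t1)
       + T*(math.cos(w*t2) - math.cos(w*t1))) / (2 * (math.pi*n)**2)
print(abs(Ct_series(n, 0, t1, t2, T) - ct0))  # ≈ 0
print(abs(Ct_series(n, 1, t1, t2, T) - ct1))  # ≈ 0
```

The agreement follows because the series is exactly the term-by-term integral $\frac{2}{T\,m!}\int_{t_1}^{t_2} t^m \cos\left(\frac{2\pi n t}{T}\right)dt$.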



and so on.

So we can write an expression for $a_n$:
$$
a_n=\sum_{m=0}^{\infty}{\frac{d^m}{dt^m}\textrm{f}(0)\cdot\textrm{Ct}(n,m)}
$$
or
$$
a_n=\sum_{m=0}^{\infty}{\frac{1}{m!}\cdot\frac{d^m}{dt^m}\textrm{f}(0)\cdot\left(\sum_{i=0}^{\infty}{\frac{\left(-1 \right)^i\cdot 2^{2\cdot i+1}\cdot n^{2\cdot i}}{ \left(1+m+2\cdot i\right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^{2\cdot i}\cdot \left(t_2^{2\cdot i+m+1} -t_1^{2\cdot i+m+1}\right) }{T^{2\cdot i+1}}}\right)}
$$
We can easily see that $\frac{1}{m!}$ is the Taylor coefficient; thus the relationship between the Fourier coefficients and the coefficients of the Taylor expansion is established via special functions.
In the particular full-wave case $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ we can write a simpler closed form for $\textrm{Ct}(n,m)$:
$$
\frac{
\left(-1 \right)^n T^m
\left(
\left(\pi n\right)^{-m-\frac{1}{2}}\left(\textrm{I}\cdot\textrm{L}_{\textrm{S1}}\left(m+\frac{3}{2}, \frac{1}{2}, -\pi n \right)-\textrm{L}_{\textrm{S1}}\left(m+\frac{3}{2}, \frac{1}{2}, \pi n \right)\right)+1+\left(-1 \right)^m \right)
}
{2^{m}\cdot\left(m+1 \right)!}
$$
where $\textrm{L}_{\textrm{S1}}(\mu,\nu,z)=\textrm{s}_{\mu,\nu}(z)$ is the first Lommel function and $\textrm{I}=(-1)^{1/2}$ is the imaginary unit (so the $a_n$ coefficient is complex in general).
For example, let us consider a parabolic signal with period $T$: $\textrm{g}(t)=A\cdot t^2+B\cdot t + C$.
The coefficients $a_n$ can be found using the Fourier formula:
$$
a_n=\frac{2}{T}\cdotp\int_{-T/2}^{T/2}{\left(A\cdot t^2+B\cdot t + C\right)\cdotp\cos\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)\,dt} = A\cdot\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n
$$
For the $\textrm{g}(t)$ function: $\textrm{g}'(t)=2A\cdot t + B$, $\textrm{g}''(t)=2A$, and all derivatives of order greater than two are zero. So we can use $\textrm{Ct}(n,m)$ for $m=0,1,2$. It is easy to check that if $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ then $\textrm{Ct}(n,0)=0$ and $\textrm{Ct}(n,1)=0$; more generally the values vanish for odd $m$ and are nonzero for even $m\ge 2$. In particular, for $m=2$:



$$
\textrm{Ct}(n,2) = \frac{1}{2}\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n
$$
and for $m=4$, as an example, $\textrm{Ct}(n,4) = \frac{1}{48}\left(\frac{T}{\pi n}\right)^4\cdot\left((\pi n)^2-6\right)\cdot(-1)^n$.
Finally we can obtain, for example for $m$ up to 4:

$$
\begin{align*}
a_n&=\sum_{m=0}^{4}{\frac{d^m}{dt^m}\textrm{g}(0)\cdot\textrm{Ct}(n,m)}=\\
&=\left(A\cdot 0^2+B\cdot 0 + C\right)\cdot 0+\left(2A\cdot 0+B\right)\cdot 0+\frac{1}{2}\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n \cdot 2A+0\cdot\textrm{Ct}(n,3)+0\cdot\textrm{Ct}(n,4)=\\
&=A\cdot\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n
\end{align*}
$$
which is the same result as from the Fourier integral. An interesting result for non-integer harmonic index $n$ of $\textrm{g}(t)$ is:
$$
a_n=\frac{1}{120}A\cdot T^2 \cdot (-1)^{n+1} \left({}_{2}\textrm{F}_1\left( 1, 3; \frac{7}{2}; -\left(\frac{\pi n}{2}\right)^2\right)\cdot(\pi n)^2-20\right)
$$
where ${}_{2}\textrm{F}_1\left(a,b;c;z\right)$ is the hypergeometric function.
So we can plot the coefficients calculated from the Fourier integral together with this special result (for $T=1,\,A=1$):
[Plot: cosine series. Red circles and red line: the Fourier cosine coefficients (real part of the non-integer power of $-1$); solid blue line: real part of the expression with the hypergeometric function; dashed green line: its imaginary part. For integer $n$ the imaginary part is zero and $a_n$ is real.]
              Similar expressions can be obtained for the sine $b_n$ series.
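As a quick numerical sanity check of the parabola example above (not part of the original answer), one can compare the Fourier integral, evaluated by simple quadrature, with the Taylor-side reconstruction $a_n=\sum_m \textrm{g}^{(m)}(0)\,\textrm{Ct}(n,m)$, where only the $m=2$ term survives:

```python
import math

# Parabolic signal g(t) = A t^2 + B t + C on [-T/2, T/2]; values are arbitrary.
A, B, C, T = 1.5, 0.7, -2.0, 2.0
n = 3

# a_n via the Fourier integral, approximated with the trapezoid rule.
N = 100_000
h = T / N
acc = 0.0
for k in range(N + 1):
    t = -T/2 + k*h
    weight = 0.5 if k in (0, N) else 1.0
    acc += weight * (A*t*t + B*t + C) * math.cos(2*math.pi*n*t/T)
a_quad = (2/T) * acc * h

# a_n via the reconstruction: g''(0) = 2A and Ct(n, 2) = (1/2)(T/(pi n))^2 (-1)^n,
# while Ct(n, 0) = Ct(n, 1) = 0 in the full-wave case.
a_taylor = 2*A * 0.5 * (T/(math.pi*n))**2 * (-1)**n

print(a_quad, a_taylor)  # both ≈ A (T/(pi n))^2 (-1)^n
```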







              share|cite|improve this answer






















              answered Aug 9 '16 at 22:00









              Timur Zhoraev

              19426












              • 3




                I am afraid I can't see the forest for the trees. Could you please give the general idea what you are trying to achieve with all those transformations and how it answers the question. Thank you.
                – vonjd
                Aug 10 '16 at 6:14






              • 2




                As stated by Qiaochu, in fact, we have a set of numbers for the speed and acceleration, and a set of numbers for the sine and cosine coefficients (harmonics) for process description. In the first case, for example, you can achieve good approximation for free-fall parabolic movement, in the second - if this movement is due to vibrations, such as a pendulum or a spring. The idea is to find a function that directly converts vector of numerical values of speed, acceleration (and higher derivatives) to harmonics (sine and cosine coeffs. or phases/amplitudes): that is Lommel function as shown.
                – Timur Zhoraev
                Aug 10 '16 at 7:35
























              up vote
              5
              down vote













I think that the missing link that connects the Fourier transform to the Taylor series expansion is Euler's formula, $e^{\jmath x}=\cos(x) +\jmath \sin(x)$. This celebrated formula establishes a relationship between trigonometric functions of real arguments and exponential functions of complex (i.e. imaginary) arguments. In doing so, it establishes a fundamental connection with the Fourier transform, which is essentially trigonometric in nature. In fact, the Fourier transform of a function $f(t)$ can be viewed as the limit of the Fourier series of $f(t)$ as the period $T\rightarrow \infty$, and a Fourier series is, by definition, a linear summation of sine and cosine functions. The Taylor series comes into play in the derivation of Euler's formula.



As previously mentioned, Euler's formula states $e^{\jmath x}=\cos(x) +\jmath \sin(x)$. Hence, it's acceptable to conceptually superimpose the conventional $(x, y)$ unit circle and the complex plane, as they both portray the polar Eulerian expression on the continuous interval from $0$ to $2\pi$. Evaluating Euler's formula at $x=2\pi\theta$, we arrive at
$$e^{2\pi\jmath \theta } = \cos(2\pi\theta)+\jmath\,\sin(2\pi\theta)$$
This expression is intrinsically intertwined with the nature of the Fourier transform, because Fourier transforms aim to convert a function from the time domain to the frequency domain. To do so, we decompose periodic functions into simple, linear sums of sines and cosines, and let the period of this representation approach infinity.



So, where does the Taylor series fit into all this? We use the Taylor series expansion (or more precisely, the Maclaurin series expansion) to obtain Euler's formula. Here's the proof (or derivation, depending on your perspective):



Proposition: For any complex number $z=x+\jmath y=x+\sqrt{-1}\cdot y$, i.e. $x=\Re \{z \} $ and $y=\Im \{z \} $, it can be said that, for real values $x$,

$$ e^{\jmath x} = \cos(x)+\jmath \sin(x)$$


Lemma 1: The Taylor series expansion of an infinitely differentiable function $f(x)$ about the point $a$ is defined as
$$f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k$$
We'll expand about $a=0$, which gives a special case of the Taylor series called the Maclaurin series. The Maclaurin expansions of $\sin(\theta)$ and $\cos(\theta)$ are
$$ \sin (\theta)=\sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,\theta^{2k+1} = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} \dots$$
$$ \cos (\theta)=\sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\,\theta^{2k} = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \frac{\theta^6}{6!} \dots$$
This is not a surprise, as we know that sine and cosine are odd and even functions, respectively. We've now accounted for the trigonometric portions of $e^{\jmath x}$, but have not yet addressed the complex component.
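The two Maclaurin expansions can be spot-checked numerically; this small sketch (not from the original answer, helper names are mine) sums the first few terms of each series:

```python
import math

def sin_maclaurin(x, terms=10):
    # theta - theta^3/3! + theta^5/5! - ...  (odd powers only)
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

def cos_maclaurin(x, terms=10):
    # 1 - theta^2/2! + theta^4/4! - ...  (even powers only)
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(terms))

print(sin_maclaurin(0.8), math.sin(0.8))  # agree to machine precision
print(cos_maclaurin(0.8), math.cos(0.8))  # agree to machine precision
```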



Lemma 2: We apply the previously defined Taylor (more specifically Maclaurin) series expansion to the function $e^z$ with $z=\jmath x$, and we get
$$e^{\jmath x} =\sum_{k=0}^{\infty} \frac{(\jmath x)^k}{k!} = 1 + \jmath x - \frac{x^2}{2!} - \jmath\frac{x^3}{3!} + \frac{x^4}{4!} + \jmath\frac{x^5}{5!} - \dots
$$



Comparing the right-hand terms from the Taylor series expansions of $\cos(\theta)$, $\sin(\theta)$, and $e^{\jmath \theta}$ performed in Lemmas 1 and 2, we see that the even-power terms of Lemma 2 are exactly the expansion of $\cos(x)$, while the odd-power terms are $\jmath$ times the expansion of $\sin(x)$; summing the two expansions from Lemma 1 accordingly, we arrive at
$e^{\jmath x}=\cos(x) +\jmath \sin(x)$. [Q.E.D.]
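The identity just proved can be verified numerically by summing the Maclaurin series of $e^{\jmath x}$ directly. A minimal sketch (`exp_maclaurin` is a hypothetical helper, not from the original answer):

```python
import cmath
import math

def exp_maclaurin(z, terms=30):
    # Partial sum 1 + z + z^2/2! + ..., accumulated iteratively
    # to avoid computing factorials explicitly.
    s, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        s += term
        term *= z / (k + 1)
    return s

x = 1.234
lhs = exp_maclaurin(1j * x)                 # series for e^{jx}
rhs = complex(math.cos(x), math.sin(x))     # cos(x) + j sin(x)
print(lhs, rhs, abs(lhs - cmath.exp(1j * x)))
```

With 30 terms the partial sum matches $\cos(x)+\jmath\sin(x)$ to machine precision for moderate $x$.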



Now that we've proven Euler's formula, we can evaluate it at $x=2\pi\theta$ to acquire

$$e^{2\pi\jmath \theta } = \cos(2\pi\theta)+\jmath\,\sin(2\pi\theta)$$



Due to the characteristics of the sine and cosine functions, it is possible to use simple integration to recover the amplitude of each sine and cosine wave represented in a Fourier transform (essentially the reverse of the above proof). In the overwhelming majority of cases, it's highly useful to take the complex exponential of Euler's formula as the kernel to integrate against. Because Euler's formula and the Fourier transform are both (at least in part) fundamentally trigonometric in nature, using Euler's formula greatly simplifies most of the real portions of Fourier analyses. Additionally, for the complex case, frequencies can be represented inverse-temporally using a combination of Euler's formula and the Fourier series expansion.



              So, it's a bit messy and convoluted (etymologically, not integrally), but it really boils down to the fact that the Taylor (or McLauren) series, the Fourier series and transform, and Euler's formula all relate a trigonometrically
              The differences between the three arise by nature of application. Taylor series are used to represent functions as infinite sums of their derivatives. Fourier series and transforms are used in linear systems &/or differential equations to convert signals or DEs from the time to frequency domain. Euler's formula is used to relate trigonometric and complex exponential (complexponential?!) funcions, and is also a formula that, when evaluated at $x=pi$, yields Euler's identity, $e^{jmath pi}+1=0$, an equation so austerely eloquent and aesthetically arousing that I'd be down to stare at all day.






              share|cite|improve this answer

























                up vote
                5
                down vote













I think the missing link connecting the Fourier transform to the Taylor series expansion is Euler's formula, $e^{\jmath x}=\cos(x)+\jmath\sin(x)$. This celebrated formula relates trigonometric functions of real arguments to exponential functions of imaginary arguments. In doing so, it establishes a fundamental connection with the Fourier transform, which is essentially trigonometric in nature: the Fourier transform of a function $f(t)$ can be viewed as the limit of the Fourier series of $f(t)$ as the period tends to infinity, and a Fourier series is, by definition, a linear combination of sine and cosine functions. The Taylor series comes into play in the derivation of Euler's formula.



As previously mentioned, Euler's formula states $e^{\jmath x}=\cos(x)+\jmath\sin(x)$. Hence, we may conceptually superimpose the conventional $(x, y)$ unit circle and the complex plane, as both portray the polar form of Euler's expression on the continuous interval from $0$ to $2\pi$. Substituting $x=2\pi\theta$ into Euler's formula, we arrive at
$$e^{2\pi\jmath\theta} = \cos(2\pi\theta)+\jmath\sin(2\pi\theta)$$
This expression is intrinsically intertwined with the nature of a Fourier transform, whose aim is to convert a function from the time domain to the frequency domain. To do so, we decompose a periodic function into a linear combination of sines and cosines, and then let the period tend to infinity.
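To make the decomposition concrete, here is a small pure-Python sketch (not from the original answer; the test signal and the helper `dft_coeff` are made up for illustration). It correlates a sampled signal against $e^{-\jmath 2\pi k n/N}$ — Euler's complex exponential — to pick out the amplitude present at each frequency:

```python
import cmath
import math

# Build one period of a signal with known content: amplitude 1.0 at
# frequency 3 and amplitude 0.5 at frequency 5 (both cosines).
N = 256
samples = [math.cos(2 * math.pi * 3 * n / N) + 0.5 * math.cos(2 * math.pi * 5 * n / N)
           for n in range(N)]

def dft_coeff(x, k):
    """Correlate x against e^{-j 2 pi k n / N} -- one DFT bin."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N

# A real cosine of amplitude A shows up as A/2 in bin k (and A/2 in bin -k).
print(round(2 * abs(dft_coeff(samples, 3)), 6))  # -> 1.0
print(round(2 * abs(dft_coeff(samples, 5)), 6))  # -> 0.5
print(round(2 * abs(dft_coeff(samples, 4)), 6))  # -> 0.0  (nothing at frequency 4)
```

Frequencies that are absent from the signal correlate to zero, which is exactly the orthogonality that the answer leans on below.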



So, where does the Taylor series fit into all this? We use the Taylor series expansion (or more precisely, the Maclaurin series expansion) to obtain Euler's formula. Here's the proof (or derivation, depending on your perspective):



Proposition: For any complex number $z=x+\jmath y=x+\sqrt{-1}\,y$, i.e. $x=\Re\{z\}$ and $y=\Im\{z\}$, it holds in the purely imaginary case $z=\jmath x$, with $x$ real, that

$$e^z = e^{\jmath x} = \cos(x)+\jmath\sin(x)$$



Lemma 1: The Taylor series expansion of an infinitely differentiable function $f(x)$ about the point $a$ is defined as
$$f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k$$
When $a = 0$, we get a special case of the Taylor series called the Maclaurin series. The Maclaurin series for $\sin(\theta)$ and $\cos(\theta)$ are
$$\sin(\theta)=\sum_{k=0}^{\infty} \frac{\sin^{(k)}(0)}{k!}\,\theta^k = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + \cdots$$
$$\cos(\theta)=\sum_{k=0}^{\infty} \frac{\cos^{(k)}(0)}{k!}\,\theta^k = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \frac{\theta^6}{6!} + \cdots$$
That only odd (respectively even) powers survive is no surprise, as sine and cosine are odd and even functions, respectively. We've now accounted for the trigonometric portions of $e^z$, but have not yet addressed the complex component.
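As a quick numerical sanity check (my own sketch, not part of the original derivation — `maclaurin_sin` and `maclaurin_cos` are illustrative names), the truncated Maclaurin sums agree with the library sine and cosine to machine precision for moderate $\theta$:

```python
import math

def maclaurin_sin(theta, terms=10):
    # theta - theta^3/3! + theta^5/5! - ...
    return sum((-1) ** k * theta ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def maclaurin_cos(theta, terms=10):
    # 1 - theta^2/2! + theta^4/4! - ...
    return sum((-1) ** k * theta ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

theta = 1.2
print(abs(maclaurin_sin(theta) - math.sin(theta)) < 1e-12)  # True
print(abs(maclaurin_cos(theta) - math.cos(theta)) < 1e-12)  # True
```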



Lemma 2: Applying the same Maclaurin series expansion to the function $e^z$ at $z=\jmath x$ (using $\frac{d}{dx}e^{\jmath x} = \jmath e^{\jmath x}$ and $\jmath^2=-1$), we get
$$e^{\jmath x}=\sum_{k=0}^{\infty} \frac{(\jmath x)^k}{k!} = 1 + \jmath x - \frac{x^2}{2!} - \jmath\frac{x^3}{3!} + \frac{x^4}{4!} + \jmath\frac{x^5}{5!} - \cdots$$



Comparing the right-hand terms from the Taylor series expansions of $\cos(x)$, $\sin(x)$, and $e^{\jmath x}$ performed in Lemmas 1 and 2, we see that the real terms of Lemma 2 are exactly the cosine series, while the imaginary terms are exactly $\jmath$ times the sine series. Multiplying the sine expansion by $\jmath$ and adding it to the cosine expansion therefore reproduces the expansion of $e^{\jmath x}$ term by term, and we arrive at
$e^{\jmath x}=\cos(x)+\jmath\sin(x)$. [Q.E.D.]
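The same term-by-term agreement can be checked numerically (a sketch of mine, with the illustrative helper `exp_series`): summing the Maclaurin series of $e^{\jmath x}$ directly, the real part lands on $\cos(x)$ and the imaginary part on $\sin(x)$:

```python
import math

def exp_series(z, terms=25):
    # Partial sum of e^z = sum_k z^k / k!, valid for complex z.
    return sum(z ** k / math.factorial(k) for k in range(terms))

x = 0.7
approx = exp_series(1j * x)
print(abs(approx.real - math.cos(x)) < 1e-12)  # True
print(abs(approx.imag - math.sin(x)) < 1e-12)  # True
```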



Now that we've proven Euler's formula, we can substitute $x=2\pi\theta$ to acquire

$$e^{2\pi\jmath\theta} = \cos(2\pi\theta)+\jmath\sin(2\pi\theta)$$



Due to the orthogonality of the sine and cosine functions, simple integration over one period recovers the amplitude of each sine and cosine wave represented in a Fourier series (similar in spirit to the reverse of the above proof). In practice it is highly convenient to use Euler's formula and integrate against the complex exponential $e^{-\jmath\omega t}$ instead: because Euler's formula and the Fourier transform are both fundamentally trigonometric in nature, this packages the sine and cosine integrals into a single expression and greatly simplifies the algebra. The inverse transform, again built from Euler's formula, then carries the frequency-domain representation back to the time domain.
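Here is what that amplitude-recovery integration looks like in a minimal sketch (my own illustration; the signal `f` and helper `cos_amplitude` are made up). A Riemann sum approximates $a_k = 2\int_0^1 f(t)\cos(2\pi k t)\,dt$, and orthogonality makes every bin except the matching one vanish:

```python
import math

# Signal with known cosine amplitudes: a_2 = 3.0 and a_7 = 0.25.
def f(t):
    return 3.0 * math.cos(2 * math.pi * 2 * t) + 0.25 * math.cos(2 * math.pi * 7 * t)

def cos_amplitude(f, k, steps=10000):
    """Riemann-sum approximation of a_k = 2 * integral_0^1 f(t) cos(2 pi k t) dt."""
    dt = 1.0 / steps
    return 2 * sum(f(n * dt) * math.cos(2 * math.pi * k * n * dt)
                   for n in range(steps)) * dt

print(round(cos_amplitude(f, 2), 4))  # -> 3.0
print(round(cos_amplitude(f, 7), 4))  # -> 0.25
print(round(cos_amplitude(f, 4), 4))  # -> 0.0
```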



So, it's a bit messy and convoluted (etymologically, not integrally), but it boils down to this: the Taylor (or Maclaurin) series, the Fourier series and transform, and Euler's formula all relate trigonometric functions to power series or complex exponentials.
The differences between the three arise by nature of application. Taylor series represent functions as infinite sums built from their derivatives at a point. Fourier series and transforms are used in linear systems and differential equations to convert signals or DEs from the time domain to the frequency domain. Euler's formula relates trigonometric and complex exponential (complexponential?!) functions, and, when evaluated at $x=\pi$, yields Euler's identity, $e^{\jmath\pi}+1=0$, an equation so austerely eloquent that I could stare at it all day.






                  answered Feb 23 '15 at 16:05









                  tajcook93

                  5112



