Fastest algorithm to decide whether an (always-halting) TM accepts a general string














Given a TM $M$ that halts on all inputs, and a general string $w$, consider the most trivial algorithm (call it $A$) for deciding whether $M$ accepts $w$:



$A$ simply simulates $M$ on $w$ and answers what $M$ answers.



The question here is: can this be proven to be the fastest algorithm for the job?



(Intuitively, it seems quite clear there could not be a faster one. Or could there?)



More formally:



Is there an algorithm $A'$ that, for every input $\langle M, w\rangle$, satisfies:




  1. If $M$ is a TM that halts on all inputs, $A'$ returns what $M$ returns on input $w$.


  2. $A'$ is faster than $A$.
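For concreteness, algorithm $A$ can be sketched as a direct step-by-step simulator. The dict-based machine encoding below is a made-up illustration, not a standard TM format, and it assumes (as the question does) that the machine halts on every input:

```python
# A minimal sketch of algorithm A: simulate M on w, step by step,
# and answer what M answers. The encoding is illustrative only.

def simulate(machine, w):
    """Run `machine` on input string w; return True iff it accepts.

    `machine` is a dict with keys:
      'start', 'accept', 'reject' -- state names
      'delta' -- maps (state, symbol) -> (new_state, write_symbol, move),
                 where move is +1 (right) or -1 (left).
    Assumes the machine halts on all inputs.
    """
    tape = dict(enumerate(w))            # sparse tape; blank cells read '_'
    state, head = machine['start'], 0
    while state not in (machine['accept'], machine['reject']):
        symbol = tape.get(head, '_')
        state, write, move = machine['delta'][(state, symbol)]
        tape[head] = write
        head += move
    return state == machine['accept']

# Example machine: accepts exactly the strings that start with '1'.
starts_with_one = {
    'start': 'q0', 'accept': 'qa', 'reject': 'qr',
    'delta': {
        ('q0', '1'): ('qa', '1', +1),
        ('q0', '0'): ('qr', '0', +1),
        ('q0', '_'): ('qr', '_', +1),
    },
}
```

The running time of `simulate` is, step for step, the running time of $M$ itself (plus simulation overhead), which is exactly what the question asks about.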



















  • There are (theoretically) infinitely many algorithms faster than that, each faster than the previous one. See, for example, this.
    – dkaeae
    Apr 1 at 14:02

  • @dkaeae Does that mean we can make any algorithm faster, indefinitely?
    – FireCubez
    Apr 1 at 16:41

  • @FireCubez In the technical sense of TMs, and for a particular meaning of infinity, yes. In the sense of algorithms running on real computers, no.
    – rlms
    Apr 1 at 19:19


















turing-machines time-complexity














edited Apr 3 at 3:50









xskxzr

4,18921033














asked Apr 1 at 13:55









Oren

375













3 Answers




















Is there an algorithm $A'$ that, for every input $\langle M, w\rangle$, satisfies:



1) If $M$ is a TM that halts on all inputs, $A'$ returns what $M$ returns on input $w$.



2) $A'$ is faster than $A$ (in worst-case terms).




It's not possible to be asymptotically faster by more than a log factor. By the time hierarchy theorem, for any reasonable function $f$, there are problems that can be solved in $f(n)$ steps that cannot be solved in $o(f(n)/log n)$ steps.



Other answers point out that you can get faster by any constant factor by the linear speedup theorem which, roughly speaking, simulates a factor of $c$ faster by simulating $c$ steps of the Turing machine's operation at once.






answered Apr 1 at 15:33
David Richerby













  • So for an input M that runs in exponential time in the worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) improving on A must be exponential too, in the worst case, on the set of inputs that includes this M with a general string $w$?
    – Oren
    Apr 1 at 16:36

  • @Oren Exactly, yes. In particular, this is how we know that $\mathrm{EXP}\neq\mathrm{P}$: it tells us there can be no polynomial-time algorithm for an $\mathrm{EXP}$-complete problem.
    – David Richerby
    Apr 1 at 17:00

  • Can you expand on the use of the time hierarchy theorem? It is still unclear to me why this specific algorithm can be improved only by a log factor, since the theorem states only that such problems exist; it doesn't follow immediately that A is one of the algorithms that can be improved only by a log factor.
    – Oren
    Apr 2 at 12:53

  • As I recall, we believe the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but $o(f(n)/\log n)$ is the best anyone has managed to prove. You can't keep shaving off log factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(\log n)^2$, which is less than $f(n)/\log n$.
    – David Richerby
    Apr 2 at 13:11

  • OK, but still: why is $A$ one of those algorithms (those that run in time $f(n)$ but not in time $o(f(n)/\log n)$)?
    – Oren
    Apr 2 at 13:25






















Dkaeae brought up a very useful trick in the comments: the Linear Speedup Theorem. Effectively, it says:




For any positive $k$, there's a mechanical transformation you can do to any Turing machine, which makes it run $k$ times faster.




(There's a bit more to it than that, but that's not really relevant here. Wikipedia has more details.)



So I propose the following family of algorithms (with hyperparameter $k$):



def decide(M, w):
    use the Linear Speedup Theorem to turn M into M', which is k times faster
    run M' on w and return the result


You can make this as fast as you want by increasing $k$: there's theoretically no limit on this. No matter how fast it runs, you can always make it faster by just making $k$ bigger.



This is why time complexity is always given in asymptotic terms (big-O and all that): constant factors are extremely easy to add and remove, so they don't really tell us anything useful about the algorithm itself. If you have an algorithm that runs in $n^5+C$ time, I can turn that into $\frac{1}{1,000,000} n^5+C$, but it'll still end up slower than $1,000,000n+C$ for large enough $n$.
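The crossover claimed here is easy to check numerically. This small sketch (taking $C = 0$ for simplicity) finds the first $n$ at which the sped-up quintic algorithm is once again slower than the linear one:

```python
# Find the first n where n**5 / 1_000_000 exceeds 1_000_000 * n.
# Comparing n**5 <= 10**12 * n keeps everything in exact integers.

def crossover():
    n = 1
    while n ** 5 <= (10 ** 12) * n:
        n += 1
    return n

# Beyond this point the quintic algorithm is slower again, despite
# its million-fold constant-factor speedup.
```

Since $n^5/10^6 > 10^6 n$ exactly when $n^4 > 10^{12}$, the crossover lands just past $n = 1000$.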



P.S. You might be wondering, "what's the catch?" The answer is, the Linear Speedup construction makes a machine with more states and a more convoluted instruction set. But that doesn't matter when you're talking about time complexity.




























    Of course there is.



    Consider, for instance, a TM $T$ which reads its entire input (of length $n$) $10^{100n}$ times and then accepts. Then the TM $T'$ which instantly accepts any input is at least $10^{100n}$ times faster than any (step for step) simulation of $T$. (You may replace $10^{100n}$ with your favorite largest computable number.)



    Hence, the following algorithm $A'$ would do it:




    1. Check whether $\langle M \rangle = \langle T \rangle$. If so, then set $\langle M \rangle$ to $\langle T' \rangle$; otherwise, leave $\langle M \rangle$ intact.

    2. Do what $A$ does.


    It is easy to see that $A'$ will now be $10^{100n}$ times faster than $A$ when given $\langle T, w \rangle$ as input. This qualifies as a (strict) asymptotic improvement since there are infinitely many values of $w$. $A'$ only needs $O(n)$ extra steps (in step 1) before doing what $A$ does, but $A$ takes $\Omega(n)$ time anyway (because it necessarily reads its entire input at least once), so $A'$ is asymptotically just as fast as $A$ on all other inputs.



    The above construction provides an improvement for one particular TM (i.e., $T$), but it can be extended to be faster than $A$ for infinitely many TMs. This can be done, for instance, by defining a series $T_k$ of TMs with a parameter $k$ such that $T_k$ reads its entire input $k^{100n}$ times and accepts. The description of $T_k$ can be made such that it is recognizable by $A'$ in $O(n)$ time, as above (imagine, for instance, $\langle T_k \rangle$ being the exact same piece of code where $k$ is declared as a constant).
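As a toy sketch of this construction: machine descriptions below are plain strings, and both the string encodings and the `toy_simulate` helper are illustrative inventions, not part of the answer itself.

```python
# Toy sketch of A': special-case the known-slow description <T>,
# swapping in the trivially fast <T'> before simulating.
# The encodings and toy_simulate are illustrative only.

SLOW_T = "read the input 10**(100*n) times, then accept"  # stands for <T>
FAST_T = "accept immediately"                             # stands for <T'>

def a_prime(machine_desc, w, simulate):
    # Step 1: an O(n) comparison of descriptions; rewrite <T> to <T'>.
    if machine_desc == SLOW_T:
        machine_desc = FAST_T
    # Step 2: do exactly what A does, on the possibly rewritten input.
    return simulate(machine_desc, w)

def toy_simulate(desc, w):
    # Stand-in for A's step-by-step simulation.
    if desc == FAST_T:
        return True  # T' accepts every input instantly
    raise NotImplementedError("full simulation elided in this sketch")
```

On $\langle T, w\rangle$ the dispatcher answers in the time it takes to compare two descriptions, where a faithful simulation would grind through the $10^{100n}$ passes.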



















    • I get the answer from the comment above (really cool, by the way, I didn't know that), but this example is for one specific TM. I mean, if $T'$ gets as input, for example, a TM that rejects all inputs immediately, it won't work. Or am I getting this wrong?
      – Oren
      Apr 1 at 14:13

    • If you are trying to prove a statement of the form $\forall x: A(x)$ wrong, then you only need to provide an $x$ which falsifies $A(x)$. (Here, $x$ is $T$ and $A(x)$ is the statement that simulating $T$ step for step is the fastest possible algorithm.)
      – dkaeae
      Apr 1 at 14:16

    • I get the logic, but this one example is what I can't see working. Can you clarify the roles of $T$ and $T'$ in the algorithm, based on $M$'s role?
      – Oren
      Apr 1 at 14:20

    • $M = T$, whereas computing $T'$ (or just answering "yes") is a faster algorithm than directly simulating $T$.
      – dkaeae
      Apr 1 at 14:23

    • Though in this example it doesn't check whether $M$ accepts $w$, so it won't always be correct. I mean, $T'$ will answer correctly for $T$, and will do it faster than $M$, but for inputs different from $T$, for example a $T''$ that rejects all inputs immediately, it will return a wrong answer. So it is an example of a fast algorithm, but one that isn't always correct.
      – Oren
      Apr 1 at 14:32







































    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    3












    $begingroup$


    Is there an algorithm $A′$, that for every input $langle M,wrangle$ satisfies:



    1) If $M$ is a TM that halts on all inputs, $A′$ will return what $M$ returns with input $w$.



    2) $A′$ is faster than $A$ (In worst case terms)




    It's not possible to be asymptotically faster by more than a log factor. By the time hierarchy theorem, for any reasonable function $f$, there are problems that can be solved in $f(n)$ steps that cannot be solved in $o(f(n)/log n)$ steps.



    Other answers point out that you can get faster by any constant factor by the linear speedup theorem which, roughly speaking, simulates a factor of $c$ faster by simulating $c$ steps of the Turing machine's operation at once.






    share|cite|improve this answer









    $endgroup$













    • $begingroup$
      So for input M that runs in exponential time in worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) to improve A must be exponential two in worst case on the set of inputs that include this M with general string $w$?
      $endgroup$
      – Oren
      Apr 1 at 16:36












    • $begingroup$
      @Oren Exactly, yes. In particular, this is how we know that $mathrm{EXP}neqmathrm{P}$: it tells us there can be no polynomial-time algorithm for an $mathrm{EXP}$-complete problem.
      $endgroup$
      – David Richerby
      Apr 1 at 17:00












    • $begingroup$
      hey can you extend of the use of the time hierarchy theorem? It is still unclear to me as to why the specific algorithm can be reduced only by log factor as the theorem states only that exists such algorithms, though it doesn't follow immediately that A is one of them (those who can be improved only by a log factor)
      $endgroup$
      – Oren
      Apr 2 at 12:53










    • $begingroup$
      As I recall, we believe that the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but that $o(f(n)/log n)$ is the best anyone's managed to prove. You can't keep shaving off log-factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(log n)^2$, which is less than $f(n)/log n$.
      $endgroup$
      – David Richerby
      Apr 2 at 13:11










    • $begingroup$
      Ok. still why is $A$ one of those algorithms (those that you can do in time $f(n)$ but not in time $o(f(n)/logn)$)?
      $endgroup$
      – Oren
      Apr 2 at 13:25


















    3












    $begingroup$


    Is there an algorithm $A′$, that for every input $langle M,wrangle$ satisfies:



    1) If $M$ is a TM that halts on all inputs, $A′$ will return what $M$ returns with input $w$.



    2) $A′$ is faster than $A$ (In worst case terms)




    It's not possible to be asymptotically faster by more than a log factor. By the time hierarchy theorem, for any reasonable function $f$, there are problems that can be solved in $f(n)$ steps that cannot be solved in $o(f(n)/log n)$ steps.



    Other answers point out that you can get faster by any constant factor by the linear speedup theorem which, roughly speaking, simulates a factor of $c$ faster by simulating $c$ steps of the Turing machine's operation at once.






    share|cite|improve this answer









    $endgroup$













    • $begingroup$
      So for input M that runs in exponential time in worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) to improve A must be exponential two in worst case on the set of inputs that include this M with general string $w$?
      $endgroup$
      – Oren
      Apr 1 at 16:36












    • $begingroup$
      @Oren Exactly, yes. In particular, this is how we know that $mathrm{EXP}neqmathrm{P}$: it tells us there can be no polynomial-time algorithm for an $mathrm{EXP}$-complete problem.
      $endgroup$
      – David Richerby
      Apr 1 at 17:00












    • $begingroup$
      hey can you extend of the use of the time hierarchy theorem? It is still unclear to me as to why the specific algorithm can be reduced only by log factor as the theorem states only that exists such algorithms, though it doesn't follow immediately that A is one of them (those who can be improved only by a log factor)
      $endgroup$
      – Oren
      Apr 2 at 12:53










    • $begingroup$
      As I recall, we believe that the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but that $o(f(n)/log n)$ is the best anyone's managed to prove. You can't keep shaving off log-factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(log n)^2$, which is less than $f(n)/log n$.
      $endgroup$
      – David Richerby
      Apr 2 at 13:11










    • $begingroup$
      Ok. still why is $A$ one of those algorithms (those that you can do in time $f(n)$ but not in time $o(f(n)/logn)$)?
      $endgroup$
      – Oren
      Apr 2 at 13:25
















    3












    3








    3





    $begingroup$


    Is there an algorithm $A′$, that for every input $langle M,wrangle$ satisfies:



    1) If $M$ is a TM that halts on all inputs, $A′$ will return what $M$ returns with input $w$.



    2) $A′$ is faster than $A$ (In worst case terms)




    It's not possible to be asymptotically faster by more than a log factor. By the time hierarchy theorem, for any reasonable function $f$, there are problems that can be solved in $f(n)$ steps that cannot be solved in $o(f(n)/log n)$ steps.



    Other answers point out that you can get faster by any constant factor by the linear speedup theorem which, roughly speaking, simulates a factor of $c$ faster by simulating $c$ steps of the Turing machine's operation at once.






    share|cite|improve this answer









    $endgroup$




    Is there an algorithm $A′$, that for every input $langle M,wrangle$ satisfies:



    1) If $M$ is a TM that halts on all inputs, $A′$ will return what $M$ returns with input $w$.



    2) $A′$ is faster than $A$ (In worst case terms)




    It's not possible to be asymptotically faster by more than a log factor. By the time hierarchy theorem, for any reasonable function $f$, there are problems that can be solved in $f(n)$ steps that cannot be solved in $o(f(n)/log n)$ steps.



    Other answers point out that you can get faster by any constant factor by the linear speedup theorem which, roughly speaking, simulates a factor of $c$ faster by simulating $c$ steps of the Turing machine's operation at once.







    share|cite|improve this answer












    share|cite|improve this answer



    share|cite|improve this answer










    answered Apr 1 at 15:33









    David RicherbyDavid Richerby

    69.7k15106195




    69.7k15106195












    • $begingroup$
      So for input M that runs in exponential time in worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) to improve A must be exponential two in worst case on the set of inputs that include this M with general string $w$?
      $endgroup$
      – Oren
      Apr 1 at 16:36












    • $begingroup$
      @Oren Exactly, yes. In particular, this is how we know that $mathrm{EXP}neqmathrm{P}$: it tells us there can be no polynomial-time algorithm for an $mathrm{EXP}$-complete problem.
      $endgroup$
      – David Richerby
      Apr 1 at 17:00












    • $begingroup$
      hey can you extend of the use of the time hierarchy theorem? It is still unclear to me as to why the specific algorithm can be reduced only by log factor as the theorem states only that exists such algorithms, though it doesn't follow immediately that A is one of them (those who can be improved only by a log factor)
      $endgroup$
      – Oren
      Apr 2 at 12:53










    • $begingroup$
      As I recall, we believe that the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but that $o(f(n)/log n)$ is the best anyone's managed to prove. You can't keep shaving off log-factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(log n)^2$, which is less than $f(n)/log n$.
      $endgroup$
      – David Richerby
      Apr 2 at 13:11










    • $begingroup$
      Ok. still why is $A$ one of those algorithms (those that you can do in time $f(n)$ but not in time $o(f(n)/logn)$)?
      $endgroup$
      – Oren
      Apr 2 at 13:25




















    • $begingroup$
      So for input M that runs in exponential time in worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) to improve A must be exponential two in worst case on the set of inputs that include this M with general string $w$?
      $endgroup$
      – Oren
      Apr 1 at 16:36












    • $begingroup$
      @Oren Exactly, yes. In particular, this is how we know that $mathrm{EXP}neqmathrm{P}$: it tells us there can be no polynomial-time algorithm for an $mathrm{EXP}$-complete problem.
      $endgroup$
      – David Richerby
      Apr 1 at 17:00












    • $begingroup$
      hey can you extend of the use of the time hierarchy theorem? It is still unclear to me as to why the specific algorithm can be reduced only by log factor as the theorem states only that exists such algorithms, though it doesn't follow immediately that A is one of them (those who can be improved only by a log factor)
      $endgroup$
      – Oren
      Apr 2 at 12:53










    • $begingroup$
      As I recall, we believe that the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but that $o(f(n)/log n)$ is the best anyone's managed to prove. You can't keep shaving off log-factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(log n)^2$, which is less than $f(n)/log n$.
      $endgroup$
      – David Richerby
      Apr 2 at 13:11










    • $begingroup$
      Ok. still why is $A$ one of those algorithms (those that you can do in time $f(n)$ but not in time $o(f(n)/logn)$)?
      $endgroup$
      – Oren
      Apr 2 at 13:25


















    $begingroup$
    So for input M that runs in exponential time in worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) to improve A must be exponential two in worst case on the set of inputs that include this M with general string $w$?
    $endgroup$
    – Oren
    Apr 1 at 16:36






    $begingroup$
    So for input M that runs in exponential time in worst case, does this mean that the (asymptotically) fastest algorithm (or family of algorithms) to improve A must be exponential two in worst case on the set of inputs that include this M with general string $w$?
    $endgroup$
    – Oren
    Apr 1 at 16:36














    $begingroup$
    @Oren Exactly, yes. In particular, this is how we know that $mathrm{EXP}neqmathrm{P}$: it tells us there can be no polynomial-time algorithm for an $mathrm{EXP}$-complete problem.
    $endgroup$
    – David Richerby
    Apr 1 at 17:00






    $begingroup$
    @Oren Exactly, yes. In particular, this is how we know that $mathrm{EXP}neqmathrm{P}$: it tells us there can be no polynomial-time algorithm for an $mathrm{EXP}$-complete problem.
    $endgroup$
    – David Richerby
    Apr 1 at 17:00














    $begingroup$
    hey can you extend of the use of the time hierarchy theorem? It is still unclear to me as to why the specific algorithm can be reduced only by log factor as the theorem states only that exists such algorithms, though it doesn't follow immediately that A is one of them (those who can be improved only by a log factor)
    $endgroup$
    – Oren
    Apr 2 at 12:53




    $begingroup$
    hey can you extend of the use of the time hierarchy theorem? It is still unclear to me as to why the specific algorithm can be reduced only by log factor as the theorem states only that exists such algorithms, though it doesn't follow immediately that A is one of them (those who can be improved only by a log factor)
    $endgroup$
    – Oren
    Apr 2 at 12:53












    $begingroup$
    As I recall, we believe that the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but that $o(f(n)/log n)$ is the best anyone's managed to prove. You can't keep shaving off log-factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(log n)^2$, which is less than $f(n)/log n$.
    $endgroup$
    – David Richerby
    Apr 2 at 13:11




    $begingroup$
    As I recall, we believe that the time hierarchy theorem ought to be strict, in the sense that there are things you can do in time $f(n)$ that can't be done in time $o(f(n))$ (analogous to the space hierarchy theorem), but that $o(f(n)/log n)$ is the best anyone's managed to prove. You can't keep shaving off log-factors since, if you managed to do it even twice, you'd be at roughly $f(n)/(log n)^2$, which is less than $f(n)/log n$.
    $endgroup$
    – David Richerby
    Apr 2 at 13:11












    $begingroup$
    Ok. still why is $A$ one of those algorithms (those that you can do in time $f(n)$ but not in time $o(f(n)/logn)$)?
    $endgroup$
    – Oren
    Apr 2 at 13:25






    $begingroup$
    Ok. still why is $A$ one of those algorithms (those that you can do in time $f(n)$ but not in time $o(f(n)/logn)$)?
    $endgroup$
    – Oren
    Apr 2 at 13:25













    4












    $begingroup$

    Dkaeae brought up a very useful trick in the comments: the Linear Speedup Theorem. Effectively, it says:




    For any positive $k$, there's a mechanical transformation you can do to any Turing machine, which makes it run $k$ times faster.




    (There's a bit more to it than that, but that's not really relevant here. Wikipedia has more details.)



    So I propose the following family of algorithms (with hyperparameter $k$):



    def decide(M, w):
    use the Linear Speedup Theorem to turn M into M', which is k times faster
    run M' on w and return the result


    You can make this as fast as you want by increasing $k$: there's theoretically no limit on this. No matter how fast it runs, you can always make it faster by just making $k$ bigger.



    This is why time complexity is always given in asymptotic terms (big-O and all that): constant factors are extremely easy to add and remove, so they don't really tell us anything useful about the algorithm itself. If you have an algorithm that runs in $n^5+C$ time, I can turn that into $frac{1}{1,000,000} n^5+C$, but it'll still end up slower than $1,000,000n+C$ for large enough $n$.



    P.S. You might be wondering, "what's the catch?" The answer is, the Linear Speedup construction makes a machine with more states and a more convoluted instruction set. But that doesn't matter when you're talking about time complexity.






    share|cite|improve this answer











    $endgroup$


















        edited Apr 2 at 15:06

























        answered Apr 1 at 14:53









        Draconis













            0












            $begingroup$

            Of course there is.



            Consider, for instance, a TM $T$ which reads its entire input (of length $n$) $10^{100n}$ times and then accepts. Then the TM $T'$, which instantly accepts any input, is at least $10^{100n}$ times faster than any (step-for-step) simulation of $T$. (You may replace $10^{100n}$ with your favorite large computable function of $n$.)



            Hence, the following algorithm $A'$ would do it:




            1. Check whether $\langle M \rangle = \langle T \rangle$. If so, then set $\langle M \rangle$ to $\langle T' \rangle$; otherwise, leave $\langle M \rangle$ intact.

            2. Do what $A$ does.


            It is easy to see $A'$ will now be $10^{100n}$ times faster than $A$ if given $\langle T, w \rangle$ as input. This qualifies as a (strict) asymptotic improvement since there are infinitely many values for $w$. $A'$ only needs $O(n)$ extra steps (in step 1) before doing what $A$ does, but $A$ takes $\Omega(n)$ time anyway (because it necessarily reads its entire input at least once), so $A'$ is asymptotically just as fast as $A$ on all other inputs.



            The above construction provides an improvement for one particular TM (i.e., $T$) but can be extended to be faster than $A$ for infinitely many TMs. This can be done, for instance, by defining a series $T_k$ of TMs with a parameter $k$ such that $T_k$ reads its entire input $k^{100n}$ times and accepts. The description of $T_k$ can be made such that it is recognizable by $A'$ in $O(n)$ time, as above (imagine, for instance, $\langle T_k \rangle$ being the exact same piece of code where $k$ is declared as a constant).
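            As a toy illustration of step 1's special-casing (not a real TM simulator: the string "encodings", the `simulate` helper, and the hard-coded answer for $T$ are all hypothetical stand-ins):

            ```python
            # Hypothetical stand-ins: TM "encodings" are plain strings, and simulate()
            # abstracts the step-by-step simulation performed by algorithm A.

            ENC_T = "read the whole input 10^(100n) times, then accept"

            def simulate(machine_enc: str, w: str) -> bool:
                """Step-by-step simulation (algorithm A). For T this would take
                ~10^(100n) passes over w; here we just return T's known answer."""
                if machine_enc == ENC_T:
                    return True                          # T accepts everything, eventually
                return machine_enc.endswith("accept")    # toy rule for other "machines"

            def a_prime(machine_enc: str, w: str) -> bool:
                # Step 1: recognize T's encoding in O(n) time and answer instantly.
                if machine_enc == ENC_T:
                    return True
                # Step 2: otherwise, do exactly what A does.
                return simulate(machine_enc, w)

            print(a_prime(ENC_T, "0101"))            # True, skipping the expensive simulation
            print(a_prime("always reject", "0101"))  # False, via plain simulation
            ```

            The point of the sketch is only the control flow: $A'$ pays a linear-time string comparison up front, and in exchange avoids the astronomically slow simulation on the one family of inputs it recognizes.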

















            $endgroup$













            • $begingroup$
              I get the answer from the comment above, really cool by the way; I didn't know that. But this example is for a specific TM. I mean, if $T'$ gets as input, for example, a TM that rejects all inputs immediately, it won't work. Or am I getting this wrong?
              $endgroup$
              – Oren
              Apr 1 at 14:13










            • $begingroup$
              If you are trying to prove a statement of the form $\forall x\colon A(x)$ wrong, then you only need to provide an $x$ which falsifies $A(x)$. (Here, $x$ is $T$ and $A(x)$ is the statement that simulating $T$ step for step is the fastest possible algorithm.)
              $endgroup$
              – dkaeae
              Apr 1 at 14:16












            • $begingroup$
              I get the logic, but this one example is what I can't see working. Can you clarify the roles of $T$ and $T'$ in the algorithm, based on $M$'s role?
              $endgroup$
              – Oren
              Apr 1 at 14:20










            • $begingroup$
              $M = T$, whereas computing $T'$ (or just answering "yes") is a faster algorithm than directly simulating $T$.
              $endgroup$
              – dkaeae
              Apr 1 at 14:23






            • 1




              $begingroup$
              Though in this example it doesn't check whether $M$ accepts $w$, so it won't always be correct. I mean, $T'$ will answer correctly for $T$, and will do it faster than $M$, but for other inputs different from $T$ (for example, an input $T''$ that rejects all inputs immediately) it will return a wrong answer. So it is an example of a fast algorithm, though one that isn't always correct.
              $endgroup$
              – Oren
              Apr 1 at 14:32


















            edited Apr 2 at 9:02

























            answered Apr 1 at 14:05









            dkaeae











