When (not how or why) to calculate Big O of an algorithm



























I was asked this question in an interview recently and was curious as to what others thought.



"When should you calculate Big O?"



Most sites/books talk about HOW to calculate Big O, but not when you should actually do it. I'm an entry-level developer with minimal experience, so I'm not sure whether I'm thinking along the right track. My thinking is that you would have a target Big O to work towards, develop the algorithm, then calculate its Big O, and finally refactor the algorithm for efficiency.



My question then becomes: is this what actually happens in industry, or am I far off?




























  • You should understand the performance of every single method you write. If it takes a variable-sized input, and the complexity isn't obvious to you, then you should calculate it as part of this understanding. – Matt Timmermans, Apr 3 at 16:50

  • I think when and why are closely linked. – Bergi, Apr 4 at 8:09


















Tags: algorithm, big-o






asked Apr 3 at 16:30









Brian Phair







5 Answers

































"When should you calculate Big O?"




When you care about the Time Complexity of the algorithm.



When do I care?



When you need to make your algorithm to be able to scale, meaning that it's expected to have big datasets as input to your algorithm (e.g. number of points and number of dimensions in a nearest neighbor algorithm).



Most notably, when you want to compare algorithms!



You are asked to do a task, for which several algorithms can be applied to. Which one do you choose? You compare the Space, Time and development/maintenance complexities of them, and choose the one that best fits your needs.
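As a quick sketch of why that comparison matters, here is a hypothetical example (not from the answer itself): two ways to find an element in sorted data, counting comparisons instead of measuring wall-clock time, so the O(n) vs. O(log n) gap is visible directly.

```python
def linear_search(items, target):
    """O(n): check every element until the target is found."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            break
    return comparisons

def binary_search(items, target):
    """O(log n): halve the sorted search space on each step."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

data = list(range(1_000_000))  # sorted input
# Worst case for linear search: the target is the last element.
print(linear_search(data, 999_999))  # 1000000
print(binary_search(data, 999_999))  # at most ~20
```

Both functions return the same answer; the Big O comparison is what tells you the second one stays fast when the input grows by orders of magnitude.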






– gsamaras, answered Apr 3 at 16:35, edited Apr 4 at 15:09





















  • Interesting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks! – Brian Phair, Apr 3 at 16:37

  • You are welcome @BrianPhair, check my edit about using time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question. – gsamaras, Apr 3 at 16:43

  • Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google. – Paddy3118, Apr 4 at 6:28

  • @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer. – gsamaras, Apr 4 at 15:09

  • Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again! – Brian Phair, Apr 6 at 21:38

































Big O represents an upper bound. It is the most useful notation because it captures worst-case behavior: it guarantees that the program will terminate within a certain time; it may finish earlier, but never later.

It gives the worst-case time complexity, i.e. the maximum time required to execute the algorithm.






– RS Patel, answered Apr 3 at 16:36



















  • Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.) – ruakh, Apr 3 at 16:48

  • ya...it's right – RS Patel, Apr 3 at 16:53

































Big O and the other asymptotic notations let us analyze an algorithm's running time by describing its behavior as the input size grows.

So whenever you need to analyze your algorithm's behavior with respect to the growth of the input, you calculate this. Let me give you an example.

Suppose you need to query over 1 billion records, and you wrote a linear search. Is that okay? How would you know? You calculate the Big O: it's O(n) for linear search, so in the worst case a query executes on the order of 1 billion operations. If your machine executes 10^7 operations per second (let's assume), a single query would take 100 seconds. So you see: you get a runtime estimate in terms of the growth of the input.
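The back-of-envelope estimate above can be sketched in a few lines (the 10^7 operations/second figure is the answer's assumption, not a measured number):

```python
import math

def estimated_seconds(n_operations, ops_per_second=10**7):
    """Rough worst-case runtime estimate: total operations / machine speed."""
    return n_operations / ops_per_second

# Linear search over 1 billion records: O(n) -> ~1e9 operations worst case.
print(estimated_seconds(10**9))  # 100.0

# A binary search over the same (sorted) data needs only ~log2(1e9) ≈ 30 operations.
print(estimated_seconds(math.log2(10**9)))  # a few microseconds
```

The point is not the exact numbers but the shape: the estimate scales with the complexity class, so switching from O(n) to O(log n) changes the answer by seven orders of magnitude here.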






– Arnab Roy, answered Apr 3 at 16:40













































    When we solve an algorithmic problem, we want to analyze the algorithm independently of the hardware it runs on. For that we have the asymptotic notations, which describe the time and space complexities of our algorithm:


    1. Theta notation: bounds the function from above and below, so it is often used for average-case complexity

    2. Omega notation: bounds the function from below; it is used for best-case complexity

    3. Big-O notation: this is the important one, as it describes worst-case complexity by bounding the function from above


    Now I think the answer to why Big O is calculated is that it gives us a fair idea of how badly our algorithm can perform as the input size increases. And if we optimize our algorithm for the worst case, the average and best cases take care of themselves.
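For reference, the three bounds in the list above can be written out formally (these are the standard textbook definitions, not part of the original answer):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ge 0 \ :\ 0 \le f(n) \le c\, g(n) \quad \forall n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ :\ 0 \le c\, g(n) \le f(n) \quad \forall n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

Note that, strictly speaking, these bound a *function* (which could describe best, average, or worst case), not a case by itself; the worst-case/best-case pairing in the list is the common informal usage.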



















































      I assume that you want to ask "when should I calculate time complexity?", just to avoid technicalities about Theta, Omega and Big-O.



      The right attitude is to estimate it almost always. Notable exceptions are pieces of code you will run just once and that you are sure will never receive a bigger input.



      The emphasis is on estimate, because it does not matter that much whether the complexity is constant or logarithmic. There is also little difference between O(n^2) and O(n^2 log n), or between O(n^3) and O(n^4). But there is a big difference between constant and linear.



      The main goal of the estimate is to answer the question: "What happens if I get a 10-times-larger input?" If the complexity is constant, nothing happens (in theory, at least). If the complexity is linear, the running time grows 10 times. If the complexity is quadratic or worse, you start to have problems.
      A secondary goal of the estimate is to answer the question: "What is the biggest input I can handle?" Again, quadratic gets you up to about 10,000 at most; O(2^n) ends around 25.



      This might sound scary and time-consuming, but in practice, working out the time complexity of code is usually trivial, since most things are either constant, logarithmic, or linear.
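The "10 times larger input" rule of thumb can be sketched numerically (the growth functions here are illustrative stand-ins for complexity classes, not measurements of any real program):

```python
import math

# How much slower does each complexity class get when the input grows 10x?
growth = {
    "O(1)":     lambda n: 1,
    "O(log n)": lambda n: math.log2(n),
    "O(n)":     lambda n: n,
    "O(n^2)":   lambda n: n ** 2,
    "O(2^n)":   lambda n: 2 ** n,
}

n = 20
for name, f in growth.items():
    slowdown = f(10 * n) / f(n)
    print(f"{name:>8}: {slowdown:.1f}x slower")
```

Constant and logarithmic classes barely move, linear grows exactly 10x, quadratic grows 100x, and exponential blows up entirely, which is the distinction the answer says is worth estimating.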






























        Your Answer






        StackExchange.ifUsing("editor", function () {
        StackExchange.using("externalEditor", function () {
        StackExchange.using("snippets", function () {
        StackExchange.snippets.init();
        });
        });
        }, "code-snippets");

        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "1"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: true,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f55500051%2fwhen-not-how-or-why-to-calculate-big-o-of-an-algorithm%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown

























        5 Answers
        5






        active

        oldest

        votes








        5 Answers
        5






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        11















        "When should you calculate Big O?"




        When you care about the Time Complexity of the algorithm.



        When do I care?



        When you need to make your algorithm to be able to scale, meaning that it's expected to have big datasets as input to your algorithm (e.g. number of points and number of dimensions in a nearest neighbor algorithm).



        Most notably, when you want to compare algorithms!



        You are asked to do a task, for which several algorithms can be applied to. Which one do you choose? You compare the Space, Time and development/maintenance complexities of them, and choose the one that best fits your needs.






        share|improve this answer





















        • 2





          Intersting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks!

          – Brian Phair
          Apr 3 at 16:37






        • 1





          You are welcome @BrianPhair, check my edit about using Time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question.

          – gsamaras
          Apr 3 at 16:43






        • 2





          Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google.

          – Paddy3118
          Apr 4 at 6:28











        • @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer.

          – gsamaras
          Apr 4 at 15:09








        • 1





          Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again!

          – Brian Phair
          Apr 6 at 21:38
















        11















        "When should you calculate Big O?"




        When you care about the Time Complexity of the algorithm.



        When do I care?



        When you need to make your algorithm to be able to scale, meaning that it's expected to have big datasets as input to your algorithm (e.g. number of points and number of dimensions in a nearest neighbor algorithm).



        Most notably, when you want to compare algorithms!



        You are asked to do a task, for which several algorithms can be applied to. Which one do you choose? You compare the Space, Time and development/maintenance complexities of them, and choose the one that best fits your needs.






        share|improve this answer





















        • 2





          Intersting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks!

          – Brian Phair
          Apr 3 at 16:37






        • 1





          You are welcome @BrianPhair, check my edit about using Time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question.

          – gsamaras
          Apr 3 at 16:43






        • 2





          Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google.

          – Paddy3118
          Apr 4 at 6:28











        • @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer.

          – gsamaras
          Apr 4 at 15:09








        • 1





          Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again!

          – Brian Phair
          Apr 6 at 21:38














        11












        11








        11








        "When should you calculate Big O?"




        When you care about the Time Complexity of the algorithm.



        When do I care?



        When you need to make your algorithm to be able to scale, meaning that it's expected to have big datasets as input to your algorithm (e.g. number of points and number of dimensions in a nearest neighbor algorithm).



        Most notably, when you want to compare algorithms!



        You are asked to do a task, for which several algorithms can be applied to. Which one do you choose? You compare the Space, Time and development/maintenance complexities of them, and choose the one that best fits your needs.






        share|improve this answer
















        "When should you calculate Big O?"




        When you care about the Time Complexity of the algorithm.



        When do I care?



        When you need to make your algorithm to be able to scale, meaning that it's expected to have big datasets as input to your algorithm (e.g. number of points and number of dimensions in a nearest neighbor algorithm).



        Most notably, when you want to compare algorithms!



        You are asked to do a task, for which several algorithms can be applied to. Which one do you choose? You compare the Space, Time and development/maintenance complexities of them, and choose the one that best fits your needs.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Apr 4 at 15:09

























        answered Apr 3 at 16:35









        gsamarasgsamaras

        52.6k25109194




        52.6k25109194








        • 2





          Intersting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks!

          – Brian Phair
          Apr 3 at 16:37






        • 1





          You are welcome @BrianPhair, check my edit about using Time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question.

          – gsamaras
          Apr 3 at 16:43






        • 2





          Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google.

          – Paddy3118
          Apr 4 at 6:28











        • @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer.

          – gsamaras
          Apr 4 at 15:09








        • 1





          Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again!

          – Brian Phair
          Apr 6 at 21:38














        • 2





          Intersting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks!

          – Brian Phair
          Apr 3 at 16:37






        • 1





          You are welcome @BrianPhair, check my edit about using Time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question.

          – gsamaras
          Apr 3 at 16:43






        • 2





          Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google.

          – Paddy3118
          Apr 4 at 6:28











        • @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer.

          – gsamaras
          Apr 4 at 15:09








        • 1





          Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again!

          – Brian Phair
          Apr 6 at 21:38








        2




        2





        Intersting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks!

        – Brian Phair
        Apr 3 at 16:37





        Intersting. I didn't think about incorporating scalability into the answer. I'll make sure to consider that going forward. Thanks!

        – Brian Phair
        Apr 3 at 16:37




        1




        1





        You are welcome @BrianPhair, check my edit about using Time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question.

        – gsamaras
        Apr 3 at 16:43





        You are welcome @BrianPhair, check my edit about using Time complexity as a powerful semaphore to compare algorithms. Good luck with your interview, good question.

        – gsamaras
        Apr 3 at 16:43




        2




        2





        Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google.

        – Paddy3118
        Apr 4 at 6:28





        Add cost, development time and maintenance time into the equation so people know that scaling up has costs; otherwise everyone thinks they are Google.

        – Paddy3118
        Apr 4 at 6:28













        @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer.

        – gsamaras
        Apr 4 at 15:09







        @Paddy3118 agreed, answer updated. I had used implementation, but development seems nicer.

        – gsamaras
        Apr 4 at 15:09






        1




        1





        Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again!

        – Brian Phair
        Apr 6 at 21:38





        Ah, sorry about that, I upvoted but didn't realize about the accepting of answers. Thanks again!

        – Brian Phair
        Apr 6 at 21:38













        0














        It represents the upper bound.
        Big-oh is the most useful because represents the worst-case behavior. So, it guarantees that the program will terminate within a certain time period, it may stop earlier, but never later.



        It gives the worst time complexity or maximum time require to execute the algorithm






        share|improve this answer



















        • 5





          Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.)

          – ruakh
          Apr 3 at 16:48











        • ya...it's right

          – RS Patel
          Apr 3 at 16:53
















        0














        It represents the upper bound.
        Big-oh is the most useful because represents the worst-case behavior. So, it guarantees that the program will terminate within a certain time period, it may stop earlier, but never later.



        It gives the worst time complexity or maximum time require to execute the algorithm






        share|improve this answer



















        • 5





          Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.)

          – ruakh
          Apr 3 at 16:48











        • ya...it's right

          – RS Patel
          Apr 3 at 16:53














        0












        0








        0







        It represents the upper bound.
        Big-oh is the most useful because represents the worst-case behavior. So, it guarantees that the program will terminate within a certain time period, it may stop earlier, but never later.



        It gives the worst time complexity or maximum time require to execute the algorithm






        share|improve this answer













        It represents the upper bound.
        Big-oh is the most useful because represents the worst-case behavior. So, it guarantees that the program will terminate within a certain time period, it may stop earlier, but never later.



        It gives the worst time complexity or maximum time require to execute the algorithm







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered Apr 3 at 16:36









        RS PatelRS Patel

        18114




        18114








        • 5





          Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.)

          – ruakh
          Apr 3 at 16:48











        • ya...it's right

          – RS Patel
          Apr 3 at 16:53














        • 5





          Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.)

          – ruakh
          Apr 3 at 16:48











        • ya...it's right

          – RS Patel
          Apr 3 at 16:53








        5




        5





        Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.)

        – ruakh
        Apr 3 at 16:48





        Strictly speaking, this is not correct. Big-O denotes an asymptotic upper bound, but it can be applied to any function, not just worst-case performance. In fact, it's not unusual to apply it to average/expected-case performance. (That said, it's true that people often use big-O to mean the big-Θ of the worst case.)

        – ruakh
        Apr 3 at 16:48













        ya...it's right

        – RS Patel
        Apr 3 at 16:53





        ya...it's right

        – RS Patel
        Apr 3 at 16:53











        0














        Big O or asymptotic notations allow us to analyze an algorithm's running time by identifying its behavior as the input size for the algorithm increases.



        So whenever you need to analyse your algorithm's behavior with respect to growth of the input, you will calculate this. Let me give you an example -



        Suppose you need to query over 1 billion data. So you wrote a linear search algorithm. So is it okay? How would you know? You will calculate Big-o. It's O(n) for linear search. So in worst case it would execute 1 billion instruction to query. If your machine executes 10^7 instruction per second(let's assume), then it would take 100 seconds. So you see - you are getting an runtime analysis in terms of growth of the input.






        share|improve this answer




























          0














          Big O or asymptotic notations allow us to analyze an algorithm's running time by identifying its behavior as the input size for the algorithm increases.



          So whenever you need to analyse your algorithm's behavior with respect to growth of the input, you will calculate this. Let me give you an example -



          Suppose you need to query over 1 billion data. So you wrote a linear search algorithm. So is it okay? How would you know? You will calculate Big-o. It's O(n) for linear search. So in worst case it would execute 1 billion instruction to query. If your machine executes 10^7 instruction per second(let's assume), then it would take 100 seconds. So you see - you are getting an runtime analysis in terms of growth of the input.






          share|improve this answer


























            0












            0








            0







            Big O or asymptotic notations allow us to analyze an algorithm's running time by identifying its behavior as the input size for the algorithm increases.



            So whenever you need to analyse your algorithm's behavior with respect to growth of the input, you will calculate this. Let me give you an example -



            Suppose you need to query over 1 billion data. So you wrote a linear search algorithm. So is it okay? How would you know? You will calculate Big-o. It's O(n) for linear search. So in worst case it would execute 1 billion instruction to query. If your machine executes 10^7 instruction per second(let's assume), then it would take 100 seconds. So you see - you are getting an runtime analysis in terms of growth of the input.






            share|improve this answer













            Big O or asymptotic notations allow us to analyze an algorithm's running time by identifying its behavior as the input size for the algorithm increases.



            So whenever you need to analyse your algorithm's behavior with respect to growth of the input, you will calculate this. Let me give you an example -



            Suppose you need to query over 1 billion data. So you wrote a linear search algorithm. So is it okay? How would you know? You will calculate Big-o. It's O(n) for linear search. So in worst case it would execute 1 billion instruction to query. If your machine executes 10^7 instruction per second(let's assume), then it would take 100 seconds. So you see - you are getting an runtime analysis in terms of growth of the input.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Apr 3 at 16:40









            Arnab RoyArnab Roy

            397314




            397314























                0














                When we are solving an algorithmic problem we want to test the algorithm irrespective of hardware where we are running the algorithm. So we have certain asymptotic notation using which we can define the time and space complexities of our algorithm.




                1. Theta-Notation: Used for defining average case complexity as it bounds the function from top and bottom


                2. Omega-Notation: Bounds the function from below. It is used for best-time complexity


                3. Big-O Notation: This is important as it tells about worst-case complexity and it bounds the function from top.



                Now I think the answer to Why BIG-O is calculated is that using it we can get a fair idea that how bad our algorithm can perform as the size of input increases. And If we can optimize our algorithm for worst case then average and best case will take care for themselves.






answered Apr 3 at 16:53 by Yug Singh
I assume you actually want to ask "when should I calculate time complexity?", just to avoid technicalities about Theta, Omega and Big-O.

The right attitude is to guess it almost always. Notable exceptions are pieces of code you will run just once and that you are sure will never receive a bigger input.

The emphasis is on guess, because it does not matter much whether the complexity is constant or logarithmic. There is also little difference between O(n^2) and O(n^2 log n), or between O(n^3) and O(n^4). But there is a big difference between constant and linear.

The main goal of the guess is to answer the question: "What happens if I get a 10 times larger input?" If the complexity is constant, nothing happens (in theory, at least). If it is linear, the running time grows 10 times. If it is quadratic or worse, you start to have problems.

A secondary goal is to answer the question: "What is the biggest input I can handle?" Quadratic gets you up to about 10,000 at most; O(2^n) ends around 25.

This might sound scary and time consuming, but in practice, working out the time complexity of code is usually trivial, since most things are constant, logarithmic or linear.
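The "10 times larger input" question can be checked by counting operations instead of timing (a sketch with made-up example functions, one linear and one quadratic):

```python
def scan_steps(n):
    """O(n): a single pass over n items."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def all_pairs_steps(n):
    """O(n^2): compare every item against every item."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

small, big = 100, 1000  # the input grows 10x
print(scan_steps(big) // scan_steps(small))            # 10
print(all_pairs_steps(big) // all_pairs_steps(small))  # 100
```

A 10x larger input costs 10x the work for the linear routine but 100x for the quadratic one, which is exactly the intuition the guess is meant to give you.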






answered Apr 3 at 21:30 by usamec