Parallel Computing Problem [on hold]
























The program finishes quickly (about 30 seconds) if I don't parallelize. But when I replace Table with ParallelTable, the program keeps running and never produces any output.

My code:
https://privatebin.net/?d1b1eeff435720eb#XWLsW2gY2EfTFnX+eQCXtCVBPN4budq3wQtVWaNwI4g=






















put on hold as off-topic by happy fish, MarcoB, Carl Lange, Szabolcs, Alex Trounev 4 hours ago


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question cannot be answered without additional information. Questions on problems in code must describe the specific problem and include valid code to reproduce it. Any data used for programming examples should be embedded in the question or code to generate the (fake) data must be included." – MarcoB, Carl Lange, Szabolcs, Alex Trounev

If this question can be reworded to fit the rules in the help center, please edit the question.





















Tags: parallelization






asked Apr 19 at 4:12 – guangya (193)




2 Answers






























Replacing your t' with any legal symbol, e.g. tp, will solve the problem. Assigning to t' actually performs Derivative[1][t] = 1, which is not advisable.

The reason for this strange behavior is that the SubValues of Derivative are not automatically distributed to the subkernels. Therefore you get 1' == 0.6 on the main kernel but 1' == 0 & on the subkernels, so the value of this constant becomes a function, which makes the later calculation fail.

After making this replacement and deleting the duplicated ParallelTable in your F definition, you get the expected result:

ParallelTable[F[0, 0, k], {k, 1, 10}]; // AbsoluteTiming
{4.8858, Null}
Table[F[0, 0, k], {k, 1, 10}]; // AbsoluteTiming
{8.10208, Null}
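The pitfall can be reproduced in a minimal sketch (the symbol names here are illustrative, not taken from the question's code):

```mathematica
t' = 0.6;              (* actually performs Derivative[1][t] = 0.6 *)
SubValues[Derivative]  (* the assignment is stored here, and SubValues of
                          Derivative are not shipped to the subkernels *)
tp = 0.6;              (* an ordinary symbol like tp is distributed as usual *)
```

Since only OwnValues/DownValues of ordinary symbols are distributed automatically, the subkernels never see the derivative assignment, which is why the parallel run diverges from the serial one.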

















































Two things will provide an immense speed-up with parallel functions like ParallelTable:

1. Launch your kernels ahead of the initial parallel call with:

   LaunchKernels[n]  (* n kernels; the maximum available if the argument is omitted *)

2. Ensure each kernel has prior knowledge of your functions with:

   DistributeDefinitions["context`"]
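A short sketch of that setup (f and its definition are illustrative, not taken from the question's code):

```mathematica
LaunchKernels[];                 (* start the subkernels up front *)
f[x_] := x^2;
DistributeDefinitions[f];        (* make sure every subkernel knows f *)
ParallelTable[f[k], {k, 1, 4}]   (* {1, 4, 9, 16} *)
```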



You should also see some increase in speed, due to a decrease in CPU load, if you provide argument tests for all of your defined functions. What I mean by this is something like:

    f[x_?NumericQ, n_?IntegerQ]

wherein x must always be a numerical input and n an integer.
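For instance (a hypothetical f, not from the question's code), the pattern tests keep the definition from firing on symbolic input:

```mathematica
f[x_?NumericQ, n_?IntegerQ] := x^n;
f[2., 3]   (* matches the tests: 8. *)
f[a, 3]    (* stays unevaluated, since a is not numeric *)
```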



I hope this helps you run parallel code better. It's a constant learning process: ever more efficient methods keep leaving the previous best ones in the dust, and we have to keep up. :D

After discussing with @happy fish, I was able to test the code, and got this output after replacing the second ParallelTable with Table:

    {-0.463029, -0.463029, -0.463029, -0.463029, -0.463029, -0.463029, -0.463029, -0.463029, -0.463029, -0.463029}

Something is amiss with this, and I apologize that I cannot immediately see what the issue is; I will take some time to check later and see if I can provide additional input. As for the rest: nesting two ParallelTable calls is, barring my lack of understanding, why you received no output after adding Parallel to the second Table. Additionally, you would assuredly benefit from a more functional implementation of this code. There are numerous inline reassignments that could likely be shortened, and I suspect they are another factor behind your long-running, non-functional parallel implementation.


























• Thanks for your answer, but I don't think it addresses the problem the OP encountered. LaunchKernels and DistributeDefinitions are done automatically; there is no need to write them explicitly. There won't be an "immense speed-up with parallel functions" in either case. Testing the parameters can avoid unnecessary symbolic computation, but that won't help here, since everything is numerical. – happy fish, Apr 19 at 6:34










• @happyfish I'm not sure that is entirely accurate, unfortunately, though it would be nice! My understanding is as follows: the first call to a parallel function takes longer than subsequent calls, because all the kernels must be launched, and some additional time goes to distributing definitions, if that is indeed done automatically. Is there a part of the documentation you can point to for this? I cannot get ParallelTable to actually use all kernels unless I do as I stated; otherwise calls take about a second longer. – CA Trevillian, Apr 19 at 6:41






• I agree with your general ideas on parallel evaluation; I am just saying that those considerations don't apply to this particular problem. If you experiment on it, you will find immediately that the bottleneck is not where you focus: it is simply distributing 10 difficult tasks to 6 (by default) kernels, and the overhead of subsequent calls and of copying definitions is negligible. For the automatic distribution of definitions, please refer to the first example under Options -> DistributedContexts and mathematica.stackexchange.com/questions/39178/… – happy fish, Apr 19 at 6:49






• Condensed matter physics: the Bott index is a kind of Chern number. @CATrevillian – guangya, Apr 19 at 10:57






• We use it to judge whether a substance is topologically trivial or not, @CATrevillian; it was the subject of the 2016 Nobel Prize. – guangya, Apr 19 at 11:02


















answered Apr 19 at 8:47, edited Apr 19 at 8:58 – happy fish (5,581), author of the first answer























answered Apr 19 at 6:16, edited Apr 19 at 7:38 – CA Trevillian (838), author of the second answer








                    • 2




                      $begingroup$
                      Thanks for your answer, but I don't think it addresses the problem OP encountered. LaunchKernels and DistributeDefinitions are done automatically, there is no need of explicitly writing down. There won't be an "immense speed-up with parallel functions" in either case. Testing the parameter can avoid unnecessary symbolic computations, but won't help here since everything is numerical.
                      $endgroup$
                      – happy fish
                      Apr 19 at 6:34










                    • $begingroup$
                      @happyfish I'm not sure that is entirely accurate, unfortunately. Though it would be nice! My understanding is as follows: When you perform the first call on a parallel function, you will spend more time than subsequent calls, this being due to the need to launch all kernels. Additionally there is some time taken to distribute definitions, if this is indeed done automatically. I am curious if there is a part of the documentation you can point to for this? I am unable to have ParallelTable actually use all kernels unless you have done as I stated, otherwise they take about a second longer.
                      $endgroup$
                      – CA Trevillian
                      Apr 19 at 6:41






                    • 1




                      $begingroup$
                      I agree with your general ideas on parallel evaluations. I am just saying that these theories don't localize for this particular problem. If you experiment on the problem you will find immediately that the bottleneck is not on where you focus: it's just distributing 10 difficult tasks to 6(by default) kernels, the overhead of subsequent calls and copying definitions is negligible. For the automatically distribute definition part, please refer to the first example in Options->DistributedContexts and mathematica.stackexchange.com/questions/39178/…
                      $endgroup$
                      – happy fish
                      Apr 19 at 6:49






                    • 1




                      $begingroup$
                      Condensed matter physics.Bott index,is a kind of Chern number@CATrevillian
                      $endgroup$
                      – guangya
                      Apr 19 at 10:57






                    • 1




                      $begingroup$
                      we use it to judge whether a substance is trivial topology or not@CATrevillian. Nobel Prize in 2016
                      $endgroup$
                      – guangya
                      Apr 19 at 11:02














                    • 2




                      $begingroup$
                      Thanks for your answer, but I don't think it addresses the problem OP encountered. LaunchKernels and DistributeDefinitions are done automatically, there is no need of explicitly writing down. There won't be an "immense speed-up with parallel functions" in either case. Testing the parameter can avoid unnecessary symbolic computations, but won't help here since everything is numerical.
                      $endgroup$
                      – happy fish
                      Apr 19 at 6:34
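
                      $begingroup$
                      [Editor's note] For reference, a minimal Wolfram Language sketch of the explicit setup the commenters are debating: launching the subkernels and distributing definitions by hand before the first ParallelTable call. The function f and the task sizes here are placeholders, not the OP's code:

                          (* Launch the subkernels explicitly instead of relying on autolaunch *)
                          LaunchKernels[];

                          (* A stand-in for an expensive numerical task *)
                          f[k_] := Total[Table[N[Sin[i k]], {i, 1, 10^6}]];

                          (* Copy the definition of f to every subkernel up front *)
                          DistributeDefinitions[f];

                          (* Parallel calls now no longer pay the launch/distribution cost *)
                          AbsoluteTiming[ParallelTable[f[k], {k, 10}]]

                      Whether this helps depends on where the time actually goes: if, as happy fish argues, the cost is dominated by the 10 difficult tasks themselves, the one-time launch and distribution overhead is negligible either way.
                      $endgroup$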









