Python - Performance comparison for set() vs {}

A discussion following this question left me wondering, so I decided to run a few tests and compare the creation time of set((x, y, z)) vs. {x, y, z} in Python (I'm using Python 3.7).



I compared the two methods using time and timeit.
Both were consistent* with the following results:



from timeit import timeit

test1 = """
my_set1 = set((1, 2, 3))
"""
print(timeit(test1))


Result: 0.30240735499999993



test2 = """
my_set2 = {1,2,3}
"""
print(timeit(test2))


Result: 0.10771795900000003



So the second method was almost three times faster than the first.
This was quite a surprising difference to me.
What is happening under the hood that makes the set literal so much faster than the set() call?



* Note: I only show the results of the timeit tests, since they are averaged over many runs and are therefore probably more reliable, but the tests with time showed a similar difference in both cases.
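(For the curious, the time-based check was just a plain wall-clock measurement around a loop, roughly along the lines of the sketch below; the loop count is arbitrary and the variable names are only illustrative.)

import time

# Rough wall-clock version of the same comparison (illustrative sketch,
# not the exact code used for the numbers above).
N = 1_000_000

start = time.perf_counter()
for _ in range(N):
    my_set1 = set((1, 2, 3))
print("set((1, 2, 3)):", time.perf_counter() - start)

start = time.perf_counter()
for _ in range(N):
    my_set2 = {1, 2, 3}
print("{1, 2, 3}:      ", time.perf_counter() - start)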










Tags: python, python-3.x, performance, set






asked 1 hour ago by YuvalG (edited 1 hour ago)

  • Related stackoverflow.com/questions/36674083/…
    – snakecharmerb
    8 mins ago
1 Answer






(This is in response to code that has since been edited out of the question.) You forgot to call the functions in the second case. After making the appropriate modifications, the results are as expected:



from timeit import timeit

test1 = """
def foo1():
    my_set1 = set((1, 2, 3))
foo1()
"""
timeit(test1)
# 0.48808742000255734




test2 = """
def foo2():
    my_set2 = {1,2,3}
foo2()
"""
timeit(test2)
# 0.3064506609807722




Now, the difference in timings arises because set() is a function call that requires a name lookup (and then a call into the constructor), whereas the {...} construction is handled by dedicated syntax and bytecode, and is therefore much faster.
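One way to see why the compiler cannot simply turn set(...) into a BUILD_SET: the name set is resolved at run time and can be rebound, whereas the literal syntax always builds a set. A minimal sketch (deliberately shadowing the builtin, which you should not do in real code):

# Rebinding the name `set` changes what set(...) does,
# but the {...} syntax is fixed by the grammar and is unaffected.
set = lambda *args: "not a set anymore"

print(set((1, 2, 3)))    # -> 'not a set anymore'
print({1, 2, 3})         # -> {1, 2, 3}

del set                  # remove the shadowing name; the builtin is visible again
print(set((1, 2, 3)))    # -> {1, 2, 3}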



The difference is clear when you look at the disassembled bytecode.



import dis

dis.dis("set((1, 2, 3))")
  1           0 LOAD_NAME                0 (set)
              2 LOAD_CONST               3 ((1, 2, 3))
              4 CALL_FUNCTION            1
              6 RETURN_VALUE


dis.dis("{1, 2, 3}")
  1           0 LOAD_CONST               0 (1)
              2 LOAD_CONST               1 (2)
              4 LOAD_CONST               2 (3)
              6 BUILD_SET                3
              8 RETURN_VALUE


In the first case, the CALL_FUNCTION instruction invokes set on the tuple (1, 2, 3); that call carries its own (minor) overhead, and the tuple itself is simply loaded as a constant via LOAD_CONST. In the second case, the set is built directly by a single BUILD_SET instruction, which is more efficient.



As for the question about the time taken for tuple construction: it turns out to be negligible:



timeit("""(1, 2, 3)""")
# 0.01858693000394851

timeit("""{1, 2, 3}""")
# 0.11971827200613916


This is because the constant tuple (1, 2, 3) is folded in at compile time and stored with the code object (you can see this from the LOAD_CONST instruction above), so the time taken to "build" it at run time is negligible.
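If you want to check the compile-time folding directly, here is a small sketch (the exact contents of co_consts vary slightly between CPython versions):

# The tuple literal is folded into a constant of the compiled code object,
# so no BUILD_TUPLE runs at execution time.
code = compile("set((1, 2, 3))", "<example>", "eval")
print(code.co_consts)   # the folded tuple (1, 2, 3) shows up among the constants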





For larger collections, we see similar behaviour: the {...} syntax used as a set comprehension is faster than passing a generator expression to set(), which has to consume the generator to build the set.



timeit("""set(i for i in range(10000))""", number=1000)
# 0.9775058150407858

timeit("""{i for i in range(10000)}""", number=1000)
# 0.5508635920123197
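To see where the generator overhead comes from, you can compare the inner code objects the compiler produces for the two expressions (a sketch; the helper logic for digging out the nested code objects is only for illustration):

import dis

comp_code = compile("{i for i in range(10000)}", "<comp>", "eval")
gen_code = compile("set(i for i in range(10000))", "<gen>", "eval")

# The comprehension/generator bodies are nested code objects stored in co_consts.
inner_comp = next(c for c in comp_code.co_consts if hasattr(c, "co_code"))
inner_gen = next(c for c in gen_code.co_consts if hasattr(c, "co_code"))

dis.dis(inner_comp)  # tight loop: FOR_ITER ... SET_ADD adds each item directly
dis.dis(inner_gen)   # generator: FOR_ITER ... YIELD_VALUE hands each item back to set()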


Interestingly, however, set() handles a range object directly very efficiently:



timeit("""set(range(10000))""", number=1000)
# 0.3746800610097125


This happens to be even faster than the set comprehension. You will see similar behaviour for other sequences (such as lists).
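For example, to check the behaviour with an existing list yourself, something along these lines should do (the numbers will of course depend on your machine and Python build):

from timeit import timeit

setup = "data = list(range(10000))"

# Converting an existing list with set() vs. looping over it in a set comprehension.
print(timeit("set(data)", setup=setup, number=1000))
print(timeit("{x for x in data}", setup=setup, number=1000))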



My recommendation would be to use the {...} syntax both for set literals and for set comprehensions (rather than passing a generator expression to set()), and to use set() when converting an existing sequence or iterable to a set.
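In code, that recommendation boils down to something like this:

# Literal: use the {...} syntax directly.
colors = {"red", "green", "blue"}

# Deriving a set with a transformation or filter: use a set comprehension.
lengths = {len(c) for c in colors}

# Converting an existing sequence/iterable as-is: set() reads naturally.
unique_words = set("the quick brown fox jumps over the lazy dog".split())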






answered 1 hour ago by coldspeed (edited 52 mins ago)
  • He is also creating a tuple and then passing it to the set function; does the tuple creation time count?
    – Daniel Mesejo
    1 hour ago

  • @DanielMesejo Perhaps, but I cannot be sure. In this case, maybe not, as I believe Python interns (caches) tuples, so after the first couple of timeit runs that might not cause much of a difference in timings. But in theory, yes, it will contribute.
    – coldspeed
    1 hour ago

  • Whoops! Silly me, that definitely explains it. Editing the question; the first part is still rather intriguing, I'd love a more in-depth view of what's happening. I was also wondering about what @DanielMesejo asked.
    – YuvalG
    1 hour ago

  • @DanielMesejo I might be wrong, but from the bytecode it seems it does not create the tuple at run time; it is built as a constant when the Python code is initially parsed. It just LOAD_CONSTs the tuple. The overhead comes afterwards, building the set from that tuple.
    – spectras
    1 hour ago

  • @spectras You're right; that actually proves the point that the tuple is cached after the initial call.
    – coldspeed
    1 hour ago










