Sklearn 'Seed' Not Working Properly In a Section of Code [on hold]
I have written an ensemble using Scikit-Learn's VotingClassifier.
I have set a seed in the cross-validation section. However, it does not appear to 'hold': if I re-run the code block I get different results. (I can only assume each run of the code block is dividing the dataset into folds with different constituents instead of 'freezing' the random state.)
Here is the code:
#Voting Ensemble of Classification
#Create Submodels
num_folds = 10
seed = 7
kfold = KFold(n_splits=num_folds, random_state=seed)
estimators = []
model1 = LogisticRegression()
estimators.append(('LR', model1))
model2 = KNeighborsClassifier()
estimators.append(('KNN', model2))
model3 = GradientBoostingClassifier()
estimators.append(('GBM', model3))
#Create the ensemble
ensemble = VotingClassifier(estimators, voting='soft')
results = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print(results)
The printed results are the scores from the 10 CV folds. If I run this code block several times I get the following results:
1:
[0.70588235 0.94117647 1. 0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.9375 ]
2:
[0.76470588 0.94117647 1. 0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.875 ]
3:
[0.76470588 0.94117647 1. 0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.875 ]
4:
[0.76470588 0.94117647 1. 0.82352941 1. 0.88235294
0.8125 0.875 0.625 0.875 ]
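One way to test whether the fold membership itself is changing, sketched here assuming the X_train, num_folds, seed and KFold import from the code above, is to compare the test-index partitions produced by two freshly constructed splitters:
import numpy as np

# Compare the test indices from two independently constructed KFold splitters.
kf_a = KFold(n_splits=num_folds, random_state=seed)
kf_b = KFold(n_splits=num_folds, random_state=seed)
same_folds = all(
    np.array_equal(test_a, test_b)
    for (_, test_a), (_, test_b) in zip(kf_a.split(X_train), kf_b.split(X_train))
)
print(same_folds)  # True with the default shuffle=False, since the splits follow index order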
So it appears my random_state=seed isn't holding.
What is incorrect?
Thanks in advance.
python scikit-learn ensemble
asked Mar 23 at 15:13 by Windstorm1981
put on hold as off-topic by jbowman, Sycorax, Robert Long, Michael Chernick, mdewey Mar 24 at 11:30
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "This question appears to be off-topic because EITHER it is not about statistics, machine learning, data analysis, data mining, or data visualization, OR it focuses on programming, debugging, or performing routine operations within a statistical computing platform. If the latter, you could try the support links we maintain." – Sycorax, Robert Long, Michael Chernick, mdewey
If this question can be reworded to fit the rules in the help center, please edit the question.
1 Answer
The random seeds of the models (LogisticRegression, GradientBoostingClassifier) need to be fixed too, so that their random behavior becomes reproducible. Here is a working example that produces the same result over multiple runs:
import sklearn
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
import numpy as np

#Voting Ensemble of Classification
#Create Submodels
num_folds = 10
seed = 7

# Data
np.random.seed(seed)
feature_1 = np.random.normal(0, 2, 10000)
feature_2 = np.random.normal(5, 6, 10000)
X_train = np.vstack([feature_1, feature_2]).T
Y_train = np.random.randint(0, 2, 10000).T

kfold = KFold(n_splits=num_folds, random_state=seed)
estimators = []
model1 = LogisticRegression(random_state=seed)
estimators.append(('LR', model1))
model2 = KNeighborsClassifier()
estimators.append(('KNN', model2))
model3 = GradientBoostingClassifier(random_state=seed)
estimators.append(('GBM', model3))

#Create the ensemble
ensemble = VotingClassifier(estimators, voting='soft')
results = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print('sklearn version', sklearn.__version__)
print(results)
Output:
sklearn version 0.19.1
[0.502 0.496 0.483 0.513 0.515 0.508 0.517 0.499 0.515 0.504]
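One caveat about the KFold construction above: random_state only affects the splits when shuffle=True. With the default shuffle=False the folds are built in index order and the seed is ignored, and recent scikit-learn versions raise an error for that combination. If shuffled but still reproducible folds are wanted, a minimal variant would be:
# Shuffle the samples before splitting; the fixed seed keeps the folds reproducible.
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)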
Thanks for your quick reply. Not sure I follow completely. random_state=seed fixes my cross-validation. I note your line np.random.seed(seed). Intuitively it suggests to me it is ensuring repeatable generation of toy data. I already have a data set. How does that apply to 'fixing the seed of the models'?
– Windstorm1981
Mar 23 at 17:34
@Windstorm1981 My bad. Updated.
– Esmailian
Mar 23 at 17:44
ha! Clear now. So fixing the CV fixes the data splits. Fixing the models fixes how the models handle the (fixed) data splits?
– Windstorm1981
Mar 23 at 17:46
@Windstorm1981 Exactly!
– Esmailian
Mar 23 at 17:47
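To make that distinction concrete, here is a small sanity check, assuming the ensemble, kfold, X_train and Y_train defined in the answer above: once both the CV splitter and the models have fixed seeds, repeated scoring runs return identical fold scores.
# With all seeds fixed, two independent scoring runs should match exactly.
run_1 = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
run_2 = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print(np.array_equal(run_1, run_2))  # expected: True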
answered Mar 23 at 17:10 (edited Mar 23 at 17:49) by Esmailian