Sklearn 'Seed' Not Working Properly In a Section of Code [closed]
I have written an ensemble using scikit-learn's VotingClassifier.

I have set a seed in the cross-validation section. However, it does not appear to 'hold': if I re-run the code block, I get different results. (I can only assume each run of the code block is dividing the dataset into folds with different constituents instead of 'freezing' the random state.)
Here is the code:
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier

# Voting ensemble of classifiers
# Create submodels
num_folds = 10
seed = 7
kfold = KFold(n_splits=num_folds, random_state=seed)

estimators = []
model1 = LogisticRegression()
estimators.append(('LR', model1))
model2 = KNeighborsClassifier()
estimators.append(('KNN', model2))
model3 = GradientBoostingClassifier()
estimators.append(('GBM', model3))

# Create the ensemble and evaluate it with 10-fold CV
ensemble = VotingClassifier(estimators, voting='soft')
results = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print(results)
The printed results are the scores from the 10 CV folds. If I run this code block several times, I get the following results:
1:
[0.70588235 0.94117647 1. 0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.9375 ]
2:
[0.76470588 0.94117647 1. 0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.875 ]
3:
[0.76470588 0.94117647 1. 0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.875 ]
4:
[0.76470588 0.94117647 1. 0.82352941 1. 0.88235294
0.8125 0.875 0.625 0.875 ]
So it appears my random_state=seed isn't holding. What is incorrect?

Thanks in advance.

python scikit-learn ensemble

asked Mar 23 at 15:13 by Windstorm1981
closed as off-topic by jbowman, Sycorax, Robert Long, Michael Chernick, mdewey Mar 24 at 11:30
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "This question appears to be off-topic because EITHER it is not about statistics, machine learning, data analysis, data mining, or data visualization, OR it focuses on programming, debugging, or performing routine operations within a statistical computing platform. If the latter, you could try the support links we maintain." – Sycorax, Robert Long, Michael Chernick, mdewey
1 Answer
The random seeds of the models (LogisticRegression, GradientBoostingClassifier) need to be fixed too, so that their random behavior becomes reproducible. Here is a working example that produces the same result over multiple runs:
import sklearn
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
import numpy as np

# Voting ensemble of classifiers
# Create submodels
num_folds = 10
seed = 7

# Toy data (seeded only so the example itself is repeatable)
np.random.seed(seed)
feature_1 = np.random.normal(0, 2, 10000)
feature_2 = np.random.normal(5, 6, 10000)
X_train = np.vstack([feature_1, feature_2]).T
Y_train = np.random.randint(0, 2, 10000)

# Note: random_state only affects KFold when shuffle=True; newer scikit-learn
# versions raise an error if random_state is set while shuffle=False.
kfold = KFold(n_splits=num_folds, random_state=seed)

estimators = []
model1 = LogisticRegression(random_state=seed)
estimators.append(('LR', model1))
model2 = KNeighborsClassifier()  # KNN is deterministic; no seed needed
estimators.append(('KNN', model2))
model3 = GradientBoostingClassifier(random_state=seed)
estimators.append(('GBM', model3))

# Create the ensemble and evaluate it with seeded 10-fold CV
ensemble = VotingClassifier(estimators, voting='soft')
results = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print('sklearn version', sklearn.__version__)
print(results)
Output:
sklearn version 0.19.1
[0.502 0.496 0.483 0.513 0.515 0.508 0.517 0.499 0.515 0.504]
answered Mar 23 at 17:10 (edited Mar 23 at 17:49) by Esmailian
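A minimal sketch of the same idea (the toy data and helper below are illustrative only, and shuffle=True is assumed so the fold seed actually takes effect): running the fully seeded pipeline twice yields identical scores.

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

# Toy data, seeded only so this sketch is repeatable
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = rng.randint(0, 2, 200)

def seeded_scores(seed=7):
    # shuffle=True is what makes random_state meaningful for the folds
    kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
    model = LogisticRegression(random_state=seed, solver='liblinear')
    return cross_val_score(model, X, y, cv=kfold)

# Identical on every call: both the splits and the model are seeded
assert np.array_equal(seeded_scores(), seeded_scores())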
Thanks for your quick reply. Not sure I follow completely: random_state=seed fixes my cross-validation, and I note your line np.random.seed(seed). Intuitively that suggests it is ensuring repeatable generation of the toy data, but I already have a data set. How does that apply to 'fixing the seed of the models'?
– Windstorm1981, Mar 23 at 17:34

@Windstorm1981 My bad. Updated.
– Esmailian, Mar 23 at 17:44

Ha! Clear now. So fixing the CV fixes the data splits, and fixing the models fixes how the models handle the (fixed) data splits?
– Windstorm1981, Mar 23 at 17:46

@Windstorm1981 Exactly!
– Esmailian, Mar 23 at 17:47