Hi all, since this competition is just for practice, I was wondering if anyone wanted to spitball ideas together or potentially group up? I’ve gotten to around 0.74 accuracy with just LightGBM, basically zero feature engineering, and super minimal fine-tuning, so I’m sure there’s a lot more to be done!
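For reference, my whole setup is basically this (just a sketch; X and y are placeholders for the raw features and labels as loaded from the data, nothing else):

```python
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Sketch only -- X and y stand in for the raw features and labels,
# no feature engineering beyond what the data already contains.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = lgb.LGBMClassifier()  # near-default parameters, minimal tuning
clf.fit(X_train, y_train)
print(accuracy_score(y_valid, clf.predict(X_valid)))
```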
Hi there, what about your AUC value?
Hi, it’s 0.8065. What’s yours like?
Hi, my results:
Multi-class area under the curve: 0.8706, but the F1-score values are strange:
class    precision    recall       f1
low      0.4165427    0.4153985    0.4159698
medium   0.6583504    0.6610639    0.6597044
high     0.5399637    0.5366671    0.5383103
Here low, medium, and high correspond to levels 1, 2, and 3 respectively.
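For context, something like this would produce those numbers (a sketch; y_valid, proba, and preds are placeholders for the validation labels, predicted probabilities, and predicted labels, and the one-vs-rest macro averaging for the AUC is my own choice):

```python
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

# Sketch only -- y_valid holds the true levels (1-3), proba the predicted
# class probabilities, preds the predicted labels.
auc = roc_auc_score(y_valid, proba, multi_class="ovr", average="macro")
precision, recall, f1, _ = precision_recall_fscore_support(
    y_valid, preds, labels=[1, 2, 3]  # low, medium, high
)
print(auc, precision, recall, f1)
```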
Hi Natalie, I am trying to use the LightGBM ranker but I am having a hard time with the group data field and the eval_group data. Do you have any idea how to generate them?
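Here is roughly what I have been trying, in case it helps to see it (just a sketch; the query_id, feature, and target column names are placeholders I made up):

```python
import lightgbm as lgb

# Sketch only -- train/valid are placeholder DataFrames. The group field is
# just the number of rows per query, in the order the rows appear in X, so
# the data has to be sorted by the query key first.
train = train.sort_values("query_id")
valid = valid.sort_values("query_id")

group_train = train.groupby("query_id", sort=False).size().to_numpy()
group_valid = valid.groupby("query_id", sort=False).size().to_numpy()

X_train, y_train = train.drop(columns=["query_id", "target"]), train["target"]
X_valid, y_valid = valid.drop(columns=["query_id", "target"]), valid["target"]

ranker = lgb.LGBMRanker(objective="lambdarank")
ranker.fit(
    X_train, y_train,
    group=group_train,
    eval_set=[(X_valid, y_valid)],
    eval_group=[group_valid],
)
```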
Hi Natalie-w, using a random forest and some minor feature engineering like target encoding, I reached an F1 score of 0.7462. Are you still looking to team up?
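In case it is useful, this is roughly what I mean by target encoding (just a sketch; 'city' and 'target' are made-up column names, the target is assumed to be the numeric level 1-3 so the mean makes sense, and a proper version would fit the encoding inside CV folds to avoid leakage):

```python
from sklearn.ensemble import RandomForestClassifier

# Sketch only -- 'city' is a placeholder categorical column and 'target' the
# numeric level (1-3). Each category is replaced by the mean target observed
# for it on the training split, so no validation labels leak in.
means = train.groupby("city")["target"].mean()
global_mean = train["target"].mean()

train["city_te"] = train["city"].map(means)
valid["city_te"] = valid["city"].map(means).fillna(global_mean)

features = [c for c in train.columns if c not in ("city", "target")]
clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
clf.fit(train[features], train["target"])
```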
Hi Natalie,
Saw your post. I have used XGBoost and got a score of 0.7468 without any feature engineering or model tuning. I was wondering if you are still interested in sharing some ideas.
Thanks
Hi Ms. Natalie,
I have used LightGBM with no feature engineering and no hyper-parameter tuning whatsoever (just the values that usually give me good results). I didn’t even treat this as an imbalanced classification problem and still got a score of 0.7449. I am interested in more ideas.
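One thing I want to try next is actually accounting for the imbalance, e.g. by letting LightGBM reweight the classes (a sketch; X_train and y_train are placeholders for the prepared data):

```python
import lightgbm as lgb

# Sketch: class_weight="balanced" scales each class inversely to its
# frequency, instead of ignoring the imbalance as I did above.
# X_train / y_train are placeholders for the prepared features and labels.
clf = lgb.LGBMClassifier(class_weight="balanced")
clf.fit(X_train, y_train)
```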
Hello,
That’s interesting. I’m using XGBoost as well, but even with fine-tuning I’ve achieved 0.7427. What have you done differently? I’m treating all variables except five as categorical.
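For context, this is roughly how I am handling the categoricals (a sketch; cat_cols is a placeholder list of column names, and it assumes a recent XGBoost version with native categorical support):

```python
from xgboost import XGBClassifier

# Sketch only -- cat_cols is a placeholder for the categorical column names.
# Casting to the pandas 'category' dtype and enabling categorical support
# lets the hist tree method split on categories directly, without one-hot
# encoding.
for col in cat_cols:
    X_train[col] = X_train[col].astype("category")
    X_valid[col] = X_valid[col].astype("category")

clf = XGBClassifier(tree_method="hist", enable_categorical=True)
clf.fit(X_train, y_train)
```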
Hi all,
With the cleanest LightGBM implementation I only get to an F1 score of 0.70.
By cleanest I mean no hyperparameter fine-tuning: just casting the categorical variables to the 'category' dtype and scaling the numerical variables, nothing else.
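Concretely, the whole thing is just this (a sketch; cat_cols, num_cols, train, and valid are placeholders, and the F1 averaging is my guess):

```python
import lightgbm as lgb
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score

# Sketch only -- cat_cols / num_cols / train / valid are placeholders.
for col in cat_cols:
    train[col] = train[col].astype("category")
    valid[col] = valid[col].astype("category")

scaler = StandardScaler()
train[num_cols] = scaler.fit_transform(train[num_cols])
valid[num_cols] = scaler.transform(valid[num_cols])

features = cat_cols + num_cols
clf = lgb.LGBMClassifier()  # default parameters, no tuning
clf.fit(train[features], train["target"])
print(f1_score(valid["target"], clf.predict(valid[features]), average="weighted"))
```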
You guys seem to be around an F1 score of 0.74. What is this difference based on? If anyone out there wants to clarify it for me, that would be appreciated. I’m not trying to win anything, just trying to get to the bottom of this kind of stuff.
Thanks.