nohup.out

[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
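
(Editor's note: this warning comes from passing both min_data_in_leaf and its alias min_child_samples in the LightGBM parameters; min_data_in_leaf takes precedence. A minimal sketch of how the warning arises, with synthetic data and illustrative values, not the job's actual configuration:)

    # Passing two aliases of the same LightGBM parameter triggers the
    # "min_child_samples ... will be ignored" warning seen above.
    import numpy as np
    import lightgbm as lgb

    X = np.random.rand(500, 10)
    y = np.random.rand(500)
    params = {
        "objective": "regression",
        "metric": "rmse",
        "min_data_in_leaf": 60,    # this value wins
        "min_child_samples": 30,   # alias of min_data_in_leaf -> ignored
    }
    lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=10)
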
stage 1: time cost : 0.12420 sec
stage 2: time cost : 0.12090 sec
stage 3: time cost : 0.12970 sec
stage 4: time cost : 0.13302 sec
stage 1: time cost : 0.00954 sec
stage 2: time cost : 0.00869 sec
stage 3: time cost : 0.00879 sec
stage 4: time cost : 0.00902 sec
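
(Editor's note: the "stage N: time cost" lines look like the output of a simple timing wrapper around pipeline stages. A hypothetical sketch that reproduces the same format; the job's real timing code is not visible in this log:)

    import time
    from contextlib import contextmanager

    @contextmanager
    def timed_stage(stage_id):
        # Prints in the "stage N: time cost : X sec" format seen above.
        start = time.time()
        yield
        print("stage %d: time cost : %.5f sec" % (stage_id, time.time() - start))

    with timed_stage(1):
        time.sleep(0.1)   # placeholder for a real pipeline stage
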
value feature generate successfully
videoid feature generate successfully
length tag_dict: 47609
tag tfidf feature generate successfully
tag dimension: 59393
length words_dict: 189467
words tfidf feature generate successfully
words dimension: 59390
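
(Editor's note: the tag/words blocks above report TF-IDF features built over two text fields, with vocabulary sizes 47609 and 189467. A hedged sketch of such a step using scikit-learn's TfidfVectorizer; the function name and toy input are assumptions, and the dimensions printed in the log come from the job itself, not this sketch:)

    from sklearn.feature_extraction.text import TfidfVectorizer

    def build_tfidf(texts, name):
        # Hypothetical reconstruction of the tfidf feature step.
        vectorizer = TfidfVectorizer()
        features = vectorizer.fit_transform(texts)
        print("length %s_dict: %d" % (name, len(vectorizer.vocabulary_)))
        print("%s tfidf feature generate successfully" % name)
        return features

    tag_features = build_tfidf(["funny cat video", "cooking tutorial"], "tag")  # toy input
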
folds 0
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.226424 valid_1's rmse: 0.225884
[200] training's rmse: 0.0973668 valid_1's rmse: 0.0971656
[300] training's rmse: 0.057936 valid_1's rmse: 0.0586671
[400] training's rmse: 0.0475353 valid_1's rmse: 0.0492602
[500] training's rmse: 0.0447344 valid_1's rmse: 0.0471132
[600] training's rmse: 0.0435196 valid_1's rmse: 0.0465598
[700] training's rmse: 0.0426755 valid_1's rmse: 0.0462999
[800] training's rmse: 0.0419673 valid_1's rmse: 0.0461619
[900] training's rmse: 0.0413599 valid_1's rmse: 0.0461027
[1000] training's rmse: 0.0408172 valid_1's rmse: 0.0460891
[1100] training's rmse: 0.0403146 valid_1's rmse: 0.0461019
Early stopping, best iteration is:
[977] training's rmse: 0.0409576 valid_1's rmse: 0.046076
folds 1
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.226544 valid_1's rmse: 0.225426
[200] training's rmse: 0.0971282 valid_1's rmse: 0.0981563
[300] training's rmse: 0.0573839 valid_1's rmse: 0.0604765
[400] training's rmse: 0.0468133 valid_1's rmse: 0.0513215
[500] training's rmse: 0.04392 valid_1's rmse: 0.0493315
[600] training's rmse: 0.0426995 valid_1's rmse: 0.0488075
[700] training's rmse: 0.0418527 valid_1's rmse: 0.0485998
[800] training's rmse: 0.0411682 valid_1's rmse: 0.0485575
[900] training's rmse: 0.040566 valid_1's rmse: 0.048484
[1000] training's rmse: 0.0400298 valid_1's rmse: 0.0484444
[1100] training's rmse: 0.0395374 valid_1's rmse: 0.0484197
[1200] training's rmse: 0.0390424 valid_1's rmse: 0.0484038
[1300] training's rmse: 0.0385941 valid_1's rmse: 0.0483951
[1400] training's rmse: 0.0381847 valid_1's rmse: 0.048402
[1500] training's rmse: 0.0377891 valid_1's rmse: 0.048371
[1600] training's rmse: 0.037364 valid_1's rmse: 0.0483954
[1700] training's rmse: 0.0370085 valid_1's rmse: 0.0483913
Early stopping, best iteration is:
[1502] training's rmse: 0.0377785 valid_1's rmse: 0.0483691
folds 2
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.227932 valid_1's rmse: 0.229577
[200] training's rmse: 0.100058 valid_1's rmse: 0.101833
[300] training's rmse: 0.0584845 valid_1's rmse: 0.060349
[400] training's rmse: 0.0480128 valid_1's rmse: 0.050103
[500] training's rmse: 0.0451386 valid_1's rmse: 0.0474865
[600] training's rmse: 0.0438463 valid_1's rmse: 0.0467042
[700] training's rmse: 0.0430083 valid_1's rmse: 0.0463218
[800] training's rmse: 0.0423034 valid_1's rmse: 0.0461414
[900] training's rmse: 0.0416902 valid_1's rmse: 0.0460446
[1000] training's rmse: 0.0411211 valid_1's rmse: 0.0460113
[1100] training's rmse: 0.0406169 valid_1's rmse: 0.0459762
[1200] training's rmse: 0.0401204 valid_1's rmse: 0.0459549
[1300] training's rmse: 0.0396321 valid_1's rmse: 0.0459738
[1400] training's rmse: 0.0392071 valid_1's rmse: 0.0459784
Early stopping, best iteration is:
[1206] training's rmse: 0.0400926 valid_1's rmse: 0.0459515
folds 3
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.226307 valid_1's rmse: 0.227556
[200] training's rmse: 0.0974512 valid_1's rmse: 0.098307
[300] training's rmse: 0.0581506 valid_1's rmse: 0.0590657
[400] training's rmse: 0.047742 valid_1's rmse: 0.048951
[500] training's rmse: 0.0448986 valid_1's rmse: 0.0465751
[600] training's rmse: 0.0436506 valid_1's rmse: 0.0459655
[700] training's rmse: 0.0427668 valid_1's rmse: 0.0457833
[800] training's rmse: 0.0421014 valid_1's rmse: 0.0456473
[900] training's rmse: 0.0414596 valid_1's rmse: 0.0455921
[1000] training's rmse: 0.0408817 valid_1's rmse: 0.0455654
[1100] training's rmse: 0.0403698 valid_1's rmse: 0.0455323
[1200] training's rmse: 0.0398803 valid_1's rmse: 0.0455336
[1300] training's rmse: 0.0394193 valid_1's rmse: 0.0455275
Early stopping, best iteration is:
[1164] training's rmse: 0.0400544 valid_1's rmse: 0.0455123
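
(Editor's note: the four "folds" blocks above are consistent with a 4-fold cross-validated LightGBM regressor that logs every 100 iterations and stops after 200 rounds without validation improvement. A hedged sketch of such a loop with placeholder data and parameters; the real feature matrix, target, and params are not in this log:)

    import numpy as np
    import lightgbm as lgb
    from sklearn.model_selection import KFold

    # Placeholder data and parameters, for illustration only.
    X, y = np.random.rand(2000, 20), np.random.rand(2000)
    params = {"objective": "regression", "metric": "rmse",
              "learning_rate": 0.05, "min_data_in_leaf": 60}

    oof = np.zeros(len(X))
    kf = KFold(n_splits=4, shuffle=True, random_state=2021)
    for fold, (trn_idx, val_idx) in enumerate(kf.split(X)):
        print("folds", fold)
        trn_set = lgb.Dataset(X[trn_idx], label=y[trn_idx])
        val_set = lgb.Dataset(X[val_idx], label=y[val_idx])
        booster = lgb.train(
            params, trn_set, num_boost_round=10000,
            valid_sets=[trn_set, val_set],   # logged as "training" and "valid_1"
            callbacks=[lgb.early_stopping(200), lgb.log_evaluation(100)],
        )
        oof[val_idx] = booster.predict(X[val_idx], num_iteration=booster.best_iteration)
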
oof_rmse: 0.046490504968166625
oof_mse: 0.0021613670521951254
test_rmse: 0.10498955445882613
test_mse: 0.011022806545462817
oof_mape: [0.04896782]
test_mape: [0.13560733]
verification r2: 0.99351663488233
test r2: 0.9630887068109539
regre ranking shape (27116, 2)
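
(Editor's note: the summary above compares out-of-fold predictions against a holdout set; "verification r2" presumably refers to the OOF set. A hedged sketch of how such a summary could be computed with scikit-learn; array names and the exact MAPE formula are assumptions:)

    import numpy as np
    from sklearn.metrics import mean_squared_error, r2_score

    def summarize(y_true, y_pred, tag):
        # The job's exact MAPE variant is not visible in the log; a weighted
        # absolute-error ratio is used here as an assumption.
        mse = mean_squared_error(y_true, y_pred)
        mape = np.abs(y_true - y_pred).sum() / max(np.abs(y_true).sum(), 1e-12)
        print("%s_rmse: %s" % (tag, np.sqrt(mse)))
        print("%s_mse: %s" % (tag, mse))
        print("%s_mape: %s" % (tag, np.array([mape])))
        print("%s r2: %s" % (tag, r2_score(y_true, y_pred)))

    summarize(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2]), "oof")  # toy example
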
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
[LightGBM] [Warning] min_data_in_leaf is set=60, min_child_samples=30 will be ignored. Current value: min_data_in_leaf=60
20210919 rov_feature_add_v1 feature table finish
20210918 rov_feature_add_v1 feature table finish
20210917 rov_feature_add_v1 feature table finish
20210916 rov_feature_add_v1 feature table finish
20210915 rov_feature_add_v1 feature table finish
20210914 rov_feature_add_v1 feature table finish
20210913 rov_feature_add_v1 feature table finish
20210912 rov_feature_add_v1 feature table finish
20210911 rov_feature_add_v1 feature table finish
20210910 rov_feature_add_v1 feature table finish
20210909 rov_feature_add_v1 feature table finish
20210908 rov_feature_add_v1 feature table finish
20210907 rov_feature_add_v1 feature table finish
20210906 rov_feature_add_v1 feature table finish
20210905 rov_feature_add_v1 feature table finish
20210904 rov_feature_add_v1 feature table finish
20210903 rov_feature_add_v1 feature table finish
20210902 rov_feature_add_v1 feature table finish
20210901 rov_feature_add_v1 feature table finish
20210831 rov_feature_add_v1 feature table finish
20210830 rov_feature_add_v1 feature table finish
20210829 rov_feature_add_v1 feature table finish
20210828 rov_feature_add_v1 feature table finish
20210827 rov_feature_add_v1 feature table finish
20210826 rov_feature_add_v1 feature table finish
20210825 rov_feature_add_v1 feature table finish
20210824 rov_feature_add_v1 feature table finish
20210823 rov_feature_add_v1 feature table finish
20210822 rov_feature_add_v1 feature table finish
20210821 rov_feature_add_v1 feature table finish
20210925 rov_predict_table_add_v1 feature table finish
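
(Editor's note: the block above shows the rov_feature_add_v1 feature table being materialized day by day from 20210919 back to 20210821, followed by the 20210925 rov_predict_table_add_v1 table. A hypothetical sketch of such a backfill loop; the function names and the ETL stub are assumptions, not the job's code:)

    from datetime import datetime, timedelta

    def build_feature_table(table_name, dt):
        # Placeholder for the job's real per-day ETL step (not shown in the log).
        pass

    def backfill(table_name="rov_feature_add_v1", end_dt="20210919", days=30):
        end = datetime.strptime(end_dt, "%Y%m%d")
        for i in range(days):
            dt = (end - timedelta(days=i)).strftime("%Y%m%d")
            build_feature_table(table_name, dt)
            print("%s %s feature table finish" % (dt, table_name))

    backfill()
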
stage 1: time cost : 0.07594 sec
stage 2: time cost : 0.07060 sec
stage 3: time cost : 0.08065 sec
stage 4: time cost : 0.08395 sec
stage 1: time cost : 0.00955 sec
stage 2: time cost : 0.00875 sec
stage 3: time cost : 0.00866 sec
stage 4: time cost : 0.00883 sec
151
121
(776799, 121)
151
121
(21060, 121)
0 5.953243
1 6.812345
2 4.634729
3 4.990433
4 3.931826
...
21055 0.000000
21056 0.000000
21057 0.000000
21058 0.000000
21059 0.000000
Name: weighted_retrn_log, Length: 21060, dtype: float64
value feature generate successfully
videoid feature generate successfully
length tag_dict: 47609
tag tfidf feature generate successfully
tag dimension: 59393
length words_dict: 189467
words tfidf feature generate successfully
words dimension: 59390
folds 0
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.217296 valid_1's rmse: 0.219055
[200] training's rmse: 0.0933997 valid_1's rmse: 0.0957921
[300] training's rmse: 0.054594 valid_1's rmse: 0.0577442
[400] training's rmse: 0.044312 valid_1's rmse: 0.0479771
[500] training's rmse: 0.0415086 valid_1's rmse: 0.0457105
[600] training's rmse: 0.0402481 valid_1's rmse: 0.0450285
[700] training's rmse: 0.0393805 valid_1's rmse: 0.0447715
[800] training's rmse: 0.0386728 valid_1's rmse: 0.0446799
[900] training's rmse: 0.0380495 valid_1's rmse: 0.0446337
[1000] training's rmse: 0.0374717 valid_1's rmse: 0.0446179
[1100] training's rmse: 0.0369606 valid_1's rmse: 0.0445955
[1200] training's rmse: 0.0364684 valid_1's rmse: 0.0445982
[1300] training's rmse: 0.0359899 valid_1's rmse: 0.0446047
[1400] training's rmse: 0.0355138 valid_1's rmse: 0.0446262
Early stopping, best iteration is:
[1220] training's rmse: 0.0363808 valid_1's rmse: 0.0445906
folds 1
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.215789 valid_1's rmse: 0.214513
[200] training's rmse: 0.0931099 valid_1's rmse: 0.092297
[300] training's rmse: 0.0546024 valid_1's rmse: 0.0552234
[400] training's rmse: 0.0443371 valid_1's rmse: 0.0464082
[500] training's rmse: 0.0414791 valid_1's rmse: 0.0446326
[600] training's rmse: 0.0402743 valid_1's rmse: 0.0442435
[700] training's rmse: 0.0394273 valid_1's rmse: 0.0440974
[800] training's rmse: 0.0387161 valid_1's rmse: 0.0440401
[900] training's rmse: 0.0380969 valid_1's rmse: 0.044004
[1000] training's rmse: 0.0375615 valid_1's rmse: 0.0439732
[1100] training's rmse: 0.0370498 valid_1's rmse: 0.0439087
[1200] training's rmse: 0.0365759 valid_1's rmse: 0.0438935
[1300] training's rmse: 0.036118 valid_1's rmse: 0.0439083
Early stopping, best iteration is:
[1172] training's rmse: 0.036709 valid_1's rmse: 0.0438807
folds 2
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.220769 valid_1's rmse: 0.220882
[200] training's rmse: 0.0971585 valid_1's rmse: 0.0977065
[300] training's rmse: 0.0570418 valid_1's rmse: 0.0587242
[400] training's rmse: 0.04556 valid_1's rmse: 0.0483838
[500] training's rmse: 0.0421882 valid_1's rmse: 0.0457825
[600] training's rmse: 0.0408069 valid_1's rmse: 0.0449982
[700] training's rmse: 0.0398654 valid_1's rmse: 0.0446595
[800] training's rmse: 0.0391188 valid_1's rmse: 0.0444782
[900] training's rmse: 0.0384418 valid_1's rmse: 0.0443802
[1000] training's rmse: 0.0378518 valid_1's rmse: 0.0443594
[1100] training's rmse: 0.037322 valid_1's rmse: 0.0443377
[1200] training's rmse: 0.036808 valid_1's rmse: 0.0443101
[1300] training's rmse: 0.0363635 valid_1's rmse: 0.0442829
[1400] training's rmse: 0.0359017 valid_1's rmse: 0.0442798
[1500] training's rmse: 0.0354922 valid_1's rmse: 0.0443043
Early stopping, best iteration is:
[1324] training's rmse: 0.036253 valid_1's rmse: 0.0442693
folds 3
Training until validation scores don't improve for 200 rounds
[100] training's rmse: 0.215786 valid_1's rmse: 0.216227
[200] training's rmse: 0.0932123 valid_1's rmse: 0.094359
[300] training's rmse: 0.0548723 valid_1's rmse: 0.0562601
[400] training's rmse: 0.0448026 valid_1's rmse: 0.0462832
[500] training's rmse: 0.0420321 valid_1's rmse: 0.0437981
[600] training's rmse: 0.0408233 valid_1's rmse: 0.0429686
[700] training's rmse: 0.0400402 valid_1's rmse: 0.042586
[800] training's rmse: 0.0393215 valid_1's rmse: 0.0424025
[900] training's rmse: 0.0386931 valid_1's rmse: 0.0422996
[1000] training's rmse: 0.0381022 valid_1's rmse: 0.0422256
[1100] training's rmse: 0.0375841 valid_1's rmse: 0.0421743
[1200] training's rmse: 0.0370754 valid_1's rmse: 0.042115
[1300] training's rmse: 0.0365871 valid_1's rmse: 0.0420674
[1400] training's rmse: 0.0361483 valid_1's rmse: 0.0420477
[1500] training's rmse: 0.0356979 valid_1's rmse: 0.0420595
[1600] training's rmse: 0.0352963 valid_1's rmse: 0.0420646
Early stopping, best iteration is:
[1430] training's rmse: 0.0360223 valid_1's rmse: 0.042041
oof_rmse: 0.04370656860219149
oof_mse: 0.0019102641389780713
test_rmse: 0.030188856957077185
test_mse: 0.0009113670843748676
oof_mape: [0.04676871]
test_mape: [0.13009724]
verification r2: 0.9937893006007936
test r2: 0.9634331570023774
regre ranking shape (21060, 2)