11 Comments
Analysis by Comet

When you bite off more than you can chew, it shows when you get to the write-up and don't have the word count or the figures to adequately cover the exact points you talk about in this blog post. That was the case with ours, but I think it was a potentially interesting pursuit nonetheless. Sometimes, just making it happen without fizzling out is a small win in itself.

https://www.kaggle.com/code/jacobmarkmiller/anticipating-play-outcomes

Ron Yurko

Great work!

Udit Ranasaria

Great article!

I do think your philosophy and points are well taken, but I also somewhat miss the good old days of practitioners building creative, new methodologies. I'm biased, of course, but part of me believes there is inherent value in experimenting with and showcasing unique/creative methodology even if the raw results themselves are somewhat meaningless (small training set, needs to be reproduced over large samples).

Part of us was frustrated with that lack of creativity among those choosing to use deep learning, and with the fact that the official NGS team still uses the Zoo architecture from 2020. To me, seeing a project like CAMO go in and find meaningful interpretation of attention weights (a non-trivial task, imo) is pretty cool in itself.

Also, to your point about 3D graphics... I remember Rishav put together a fantastic one for our PaVE submission three years ago. It got so much hype on Twitter, and I'd be lying if I said I didn't think it contributed to us getting HM. I can see why people make the investment!

Ron Yurko

I didn’t read CAMO’s before this; I thought their use of attention weights was very cool.

And if I remember correctly, I think Rishav used 3D for the ball height, right? Which is justified! Some of the 3D plots I’ve seen this year were pointless.

Udit Ranasaria

Yes, part of our modeling did involve the ball height. I think the 3D viz we put together was initially because of that, but most of it ended up being pizzazz. But yeah, irrespective of that, I think what you're saying is reasonable.

Ray Carpenter

This was a really great write-up, thank you. I could definitely apply some of your feedback to my submission next year.

https://www.kaggle.com/code/raymondcarpenter1/receiver-deception-score-rds/notebook

Conor Malone

I conformed to your rules, at least. The only thing that went right: https://www.kaggle.com/code/connyfromtheblock/decoding-qb-reads-a-markov-model-approach

Ron Yurko

Great that you shared this approach for the public to see, though. Practicing good science!

Rouven Michels

Thanks for the summary and the guidelines. I think they will help a lot of people use super awesome ML techniques in a reasonable (!) way (for writers and readers, btw). I admit our submission might not be the most innovative or fancy, but at least we followed these guidelines (for the most part) and pointed out shortcomings. Also, the approach is modular and can be layered on top of existing models. https://www.kaggle.com/code/rouvenmichels/hmmotion-using-tracking-data-to-predict-coverage

Ron Yurko

This is great. Thanks for sharing!
