A friend just posted a summary of an exciting hacking day at the office. For me, the challenge was not hacking away but serving as an organizer and judge.
Making the Hackathon interesting to different types of attendees is a challenge on its own – let alone the logistics. Many details were worked out in planning meetings in the weeks before the event. The team at VPC took care of the logistics, while the actual judgment and qualification of each project was left to the judging team.
First, a primer on counting the uncountable. How do you rate innovation? Collaboration? How do you quantify social impact? These were the questions we faced when tasked with designing the scoring, along with a system to reduce the impact of (unavoidable) bias and subjectivity.
A good read we tried was Laurenellen McCann‘s “So You Think You Want to Run a Hackathon? Think Again“. It was genuinely helpful in that it pushed us to revisit our goals for the Hackathon. Was it a community-building event? Was it a learning event? A bit of both, it turns out.
Then a friend came up with a clever way to measure, keeping the scoring tied to those refined goals: “a team gets 0 points if some project already achieves the same goal they’re aiming at. They get 1 point if, even though an existing project solves the target problem, the team solves it in a faster, better, clearer way than the original implementors… and so on.” In other words, we assign a value scale to a series of incremental milestones a project can meet under each category.
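As a rough sketch of that idea (the category names, milestone descriptions, and aggregation choice below are illustrative, not our actual rubric), each category can be modeled as an ordered ladder of milestones where a team’s points are the index of the highest milestone it meets, and per-judge scores can be combined with a median to blunt any one judge’s bias:

```python
from statistics import median

# Hypothetical milestone ladders per category (index = points awarded);
# these descriptions are illustrative, not the rubric we actually used.
RUBRIC = {
    "innovation": [
        "an existing project already achieves the same goal",        # 0
        "improves on an existing solution (faster/better/clearer)",  # 1
        "solves a problem no existing project addresses",            # 2
    ],
    "learning": [
        "team used only technologies it already knew",               # 0
        "at least one member picked up a new tool or technique",     # 1
        "the whole team worked outside its comfort zone",            # 2
    ],
}

def score_team(judge_scores: dict) -> dict:
    """Combine per-judge milestone scores into one score per category.

    Using the median rather than the mean dampens the effect of a
    single judge scoring a team unusually high or low.
    """
    combined = {}
    for category, scores in judge_scores.items():
        max_points = len(RUBRIC[category]) - 1
        if any(not 0 <= s <= max_points for s in scores):
            raise ValueError(f"score out of range for {category!r}")
        combined[category] = median(scores)
    return combined

# Three judges rate one team on two categories.
print(score_team({"innovation": [1, 2, 1], "learning": [2, 2, 1]}))
# {'innovation': 1, 'learning': 2}
```

The median keeps an outlier vote from dragging a team’s category score up or down, which is one small way to dilute the subjectivity mentioned above.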
And how can all the judges quantify each team under each category when they don’t all share the same expertise and experience? That takes teamwork and coordination. Say there’s a psychologist on the judging team and the category is “learning”: they may feel quite comfortable annotating a behavior after interviewing a team member, while evaluating “MVP” could be more challenging for that same judge.
With these things in mind, the judging team can prepare examples and train ahead of time, so real-life situations come as less of a surprise. It’s no silver bullet, but it gives every judge a sense of preparedness that lets them focus on what matters.
How about bringing some closure after the winners were chosen and awarded? Closure is important (more structured events can leverage it with relative ease). As a matter of fact, we’re a few days away from meeting with the teams to thank them, but also to share the lessons learned with the wider community of the company. We want to highlight the categories in which some projects shone brightly, offer a bit of insight into why we chose the winners, and, last but not least, gather feedback from both participants and spectators.
This was an enriching experience from which I learned a lot. I can’t help thinking about signing up for the next event with a couple of ideas from my backlog. But being on the organizing team taught me a lot about respect, about transparency (it wasn’t bad at all, but we can do better!) and, why not, about patience too.