The challenges and datasets were introduced and attendees joined teams for the duration of the weekend. With on-the-ground support from data scientists, Power BI experts and subject matter experts, they worked to interrogate the data, develop a proof of concept and deliver a solid pitch to support it. Industry experts judged the final proposals and awarded some great prizes too!
Unfortunately we had some sound problems when recording the pitches, so we don't have them all, but check out the pitches and presentations we do have below. They should give you a real flavour of what Project:Hack is all about. We are hoping to add some team blogs in the coming weeks as well, so watch this space!
Challenge #1 – Staff Wellbeing Notifications
Staff wellbeing is extremely important, and travel can add a significant amount of time to a worker's day. Teams were tasked with creating an app that notifies a user when they need to begin their journey home, alerting them when their estimated journey time home plus current time spent working is about to exceed their specified daily working limit.
Team: Well Working App
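The core check behind this challenge is simple to sketch. Below is a minimal illustration in Python of the alerting rule described above; the function name and parameters are hypothetical, not any team's actual implementation:

```python
from datetime import datetime, timedelta

def should_leave_now(shift_start: datetime, now: datetime,
                     journey_home: timedelta, daily_limit: timedelta) -> bool:
    """Alert once the time already worked plus the estimated journey
    home is about to reach the user's specified daily working limit."""
    time_worked = now - shift_start
    return time_worked + journey_home >= daily_limit
```

For example, a user who started at 08:00 with a 10-hour daily limit and a 90-minute commute would be alerted at 16:30. A real app would of course pull the journey estimate from a live travel API rather than a fixed value.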
Challenge #2 – Unsupervised / Pre-trained Machine Learning with Site Images
Unsupervised machine learning on images is a hot topic in Data Science. At Project:Hack3 we had two teams looking to classify labelled images using a supervised machine learning model. This was a huge success, but much of it (taking nothing away from the teams, of course) was down to the quality of the labelled data, which is very expensive to produce. Teams were therefore tasked with taking this to the next step and producing an algorithm / approach to demonstrate what can be achieved without a labelled data source.
Team: Conpicture Classifier (3rd Place)
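To give a flavour of the unsupervised angle: a common pattern is to turn each image into a feature vector (for example, via a pre-trained network) and then cluster those vectors with no labels at all. Here is a deliberately naive k-means sketch on toy 2-D vectors, purely to illustrate the idea rather than any team's approach:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Naive k-means: group feature vectors (e.g. embeddings of site
    photos) into k clusters without using any labels."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:  # move each centroid to the mean of its members
                centroids[c] = tuple(sum(d) / len(members)
                                     for d in zip(*members))
    return centroids, clusters
```

In practice the interesting work is in producing good embeddings; the clustering step itself is the easy part.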
Challenge #3 – Site Diary Quality Evaluation Tool
Site diaries are an essential part of a project’s records, and must be maintained in a consistent, proper and clear manner. The entries must be complete, accurate and concise so that anybody can read and understand the entry in context at a later date, and it is therefore important to ensure they are of a high quality.
The task set by SRM therefore was for teams to build a classification model that evaluates the quality of diary entries based on certain attributes and provides feedback to the individual on the areas / features where the diary entry performed poorly or could be improved.
Challenge #4 – Error Recognition for Datascope
This task involved creating code to generate flags and look for any anomalies in the Datascope swipe-in and swipe-out data. Once these had been identified, the teams analysed the data to look for insights, for example whether a certain error occurs more often at a particular time of day.
Team: The Finger
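A simple way to picture the flagging step: walk the swipe records in time order and flag anything that breaks the expected in/out pairing, then count errors by hour to surface time-of-day patterns. A minimal sketch, with a hypothetical record format (the real Datascope schema will differ):

```python
from collections import Counter

def flag_swipe_errors(events):
    """Flag anomalies in swipe records: a double swipe-in, a swipe-out
    with no preceding swipe-in, or a swipe-in never followed by a
    swipe-out. `events` is a list of (person_id, timestamp, direction)
    tuples, direction being 'in' or 'out'. Returns a Counter mapping
    hour of day to the number of errors seen in that hour."""
    errors_by_hour = Counter()
    inside = {}  # person_id -> most recent swipe-in time
    for person, ts, direction in sorted(events, key=lambda e: e[1]):
        if direction == 'in':
            if person in inside:            # double swipe-in
                errors_by_hour[ts.hour] += 1
            inside[person] = ts
        else:
            if person not in inside:        # swipe-out without swipe-in
                errors_by_hour[ts.hour] += 1
            inside.pop(person, None)
    for person, ts in inside.items():       # never swiped out
        errors_by_hour[ts.hour] += 1
    return errors_by_hour
```

The resulting hourly counts are exactly the kind of thing the teams then mined for insights, such as error spikes around shift changes.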
Challenge #5 – Contract Data Extraction
Contract tender notices are uploaded to TED containing information about a contract. It is possible to download a data dump from the TED website, however this data misses out key data fields. Teams were tasked with using Robotic Process Automation to extract the missing fields and present the complete dataset in the form of a dashboard. The second part of the challenge involved using the data fields to gain additional insights, such as which contractors perform better in each sector.
Team: Tinder for TEnDers
Challenge #6 – Using Voice Recognition to Report Safety Observations
It is not always possible to type up observations when on site (perhaps because it is raining or the user is wearing gloves). Teams were therefore tasked with using voice recognition technology to take the user through a decision flow to correctly log safety observations.
Team: Safety Steve (1st Place)
Challenge #7 – Return to Monte Carlo
All projects and programmes are inherently risky because they are unique, time-constrained, based on assumptions, performed by people and subject to external factors. These risks can impact time, cost, safety, the environment and stakeholders. They are identified and documented in a risk register (Xactium) and assessed using both qualitative and quantitative techniques. Using Monte Carlo analysis, teams were tasked with producing an output with a range of percentage confidence for management use.
Team: Monte Carlos Flying Circus (2nd Place)
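For readers new to the technique: a Monte Carlo run over a risk register repeatedly samples which risks occur and how big their impacts are, and the distribution of simulated totals gives the percentage-confidence figures (e.g. P50, P80) that management asks for. A minimal sketch, assuming a simplified register format rather than the real Xactium schema:

```python
import random

def monte_carlo_cost(risks, runs=10000, seed=42):
    """Monte Carlo over a simplified risk register. Each risk is a
    (probability, min_impact, most_likely, max_impact) tuple. Each run
    samples which risks occur and a triangular impact for each;
    returns the sorted list of simulated total impacts."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        total = 0.0
        for prob, lo, mode, hi in risks:
            if rng.random() < prob:  # does this risk occur on this run?
                total += rng.triangular(lo, hi, mode)
        totals.append(total)
    return sorted(totals)

def confidence_value(totals, pct):
    """The cost we are `pct` percent confident of not exceeding."""
    return totals[min(len(totals) - 1, int(len(totals) * pct / 100))]
```

Reading off `confidence_value(totals, 80)` gives the P80 figure: the cost exposure that 80% of simulated outcomes stayed below.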
Challenge #8 – Programme Trend Analysis
All projects are managed using a time schedule (commonly referred to as a sequence of work). The duration of each activity is estimated along with its logical dependencies. Based on past performance teams were tasked with calculating the confidence in achieving key programme performance goals and identifying patterns in activity types which may lead to delays.
Team: After Eight
Challenge #9 – Text Analysis Tool
Users enter comments into an Observation app, however generic tools struggle with construction terms, British English and sarcasm. Teams were asked to analyse the data and compare the results with off-the-shelf sentiment tools, to see whether any keywords throw off automated sentiment analysis and to glean a better understanding of the observation entries.
None of the teams chose to attempt this challenge.
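Since no team took this one on, here is one possible starting point: score entries with a generic word lexicon, then again with domain phrase overrides, and flag entries where the two disagree. Everything below is hypothetical illustration; the lexicons are toy examples, not a real sentiment tool:

```python
# Toy lexicons: construction phrases like "making good" (a remedial
# works term) can fool a generic word-level sentiment lexicon.
GENERIC_LEXICON = {'good': 1, 'great': 1, 'bad': -1, 'poor': -1, 'miss': -1}
DOMAIN_OVERRIDES = {'snag': -1, 'near miss': -1, 'making good': 0}

def score(text, lexicon, overrides=None):
    """Score text with a word lexicon, letting multi-word domain
    phrases override the generic word scores they contain."""
    text = text.lower()
    total = 0
    if overrides:
        for phrase, value in overrides.items():
            if phrase in text:
                total += value
                text = text.replace(phrase, ' ')  # consume the phrase
    total += sum(lexicon.get(w, 0) for w in text.split())
    return total
```

For instance, "making good progress" scores positive under the generic lexicon but neutral once the domain phrase is recognised; entries where the two scores disagree are exactly the keywords the challenge was asking teams to surface.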