Using data science to prevent Titanic disasters
Want to know how we crunch the data and which questions data science can help you answer?
Cast your mind back to the Titanic. Imagine you have been tasked with investigating the disaster that befell the world’s most famous ship when it hit an iceberg just before midnight on 14 April 1912. Your main focus is investigating survival probabilities and learning from this disaster so that future fatalities can be prevented. (Think of it as an old version of a modern air crash investigation.)
As an expert in machine learning, you request two files from your manager. The first is the training file — the data you will use to train your models. Ideally, the training file should contain between 66% and 80% of the available data; the more training data you have, the more accurate your model is likely to be. Below is your training file, based on Wikipedia data and the passenger manifest.
|... up to 891 records.|
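The split described above can be sketched in a few lines. This is a minimal illustration, assuming pandas and scikit-learn are available; the tiny DataFrame below is a stand-in for the full 891-row passenger list, with column names following the well-known Titanic dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for the full passenger manifest (illustrative values only).
passengers = pd.DataFrame({
    "Pclass":   [1, 3, 3, 2, 1, 3, 2, 1, 3, 2],
    "Sex":      ["f", "m", "m", "f", "m", "f", "m", "f", "m", "f"],
    "Age":      [29, 22, 35, 27, 54, 4, 30, 58, 20, 14],
    "Survived": [1, 0, 0, 1, 0, 1, 0, 1, 0, 1],
})

# Keep roughly 70% of the rows for training and hold out 30% for testing,
# which falls inside the 66%-80% training range mentioned above.
train, test = train_test_split(passengers, test_size=0.3, random_state=42)
print(len(train), len(test))  # 7 3
```

With the real files, the same call would produce a training set of several hundred rows and a held-out test set the model never sees during training.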
You also receive a test file, which is used to measure the accuracy of your model. Think of it as a year-end exam: the score your model achieves on the test file is like the final exam mark you earn for a module you studied all year at university.
(To be predicted)
|... up to 418 records.|
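Scoring a model on the held-out test file — the "final exam" above — can be sketched as follows. This is an illustration only, assuming scikit-learn; the toy data is hypothetical and stands in for the real 891-row training and 418-row test files.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Features: [Pclass, Sex (0 = male, 1 = female)]; label: Survived.
# Toy data for illustration, not the real passenger records.
X_train = [[1, 1], [3, 0], [3, 0], [2, 1], [1, 0], [3, 1]]
y_train = [1, 0, 0, 1, 0, 1]
X_test  = [[2, 0], [1, 1], [3, 0], [2, 1]]
y_test  = [0, 1, 0, 1]

# Fit a simple classifier on the training file, then grade it on the
# test file it has never seen.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)
print(accuracy_score(y_test, predictions))
```

The accuracy on the test set — not on the training set — is the honest measure of how well the model generalises to passengers it has never seen.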
You register and log in on the 48hours.ai website. On the Dashboard, you use the Dashboard->Add Job link to upload the Training File and the Test File to 48hours.ai.
The 48hours.ai engineers use your data to run different models and determine which one best fits your data.
Within 48 hours, our data engineers create the following report:
[DOWNLOAD REPORT PDF]
Your manager is very happy with the report and its insights. Based on your report, the following measures are proposed for future voyages:
- Passengers will not be able to travel without their families on intercontinental routes.
- In the future, ships will carry enough lifeboats to accommodate all passengers.