This case is about predictive maintenance using AI templates in Azure; specifically:
1 Parallelization of the model building (especially model training, but also other time-consuming operations such as table joins);
(1.1) how to parallelize the model described at:
https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/scenario-deep-learning-for-predictive-maintenance;
(1.2) how to parallelize the model described at:
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/cortana-analytics-playbook-predictive-maintenance#data-preparation;
2 Data Augmentation
The data sets used in the preceding exercises are probably fairly small, and hence unable to demonstrate the benefits of parallelization;
Is it possible, and if so how, to use data augmentation or other techniques to obtain a meaningful benchmark dataset for parallel computing?
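One simple way to inflate a small sensor dataset for benchmarking, sketched below with the standard library only: replicate each series many times with small Gaussian jitter. The function name, noise level, and replication factor are illustrative choices, not part of the Azure templates.

```python
# Sketch: grow a small time-series dataset by a chosen factor, adding
# small Gaussian noise to each copy so the copies are not exact duplicates.
import random

def augment(series, factor, noise_std=0.05, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducible benchmarks
    out = []
    for _ in range(factor):
        out.append([x + rng.gauss(0.0, noise_std) for x in series])
    return out

original = [1.0, 1.2, 0.9, 1.1]   # one toy sensor series
augmented = augment(original, factor=100)
print(len(augmented))
```

Jittered copies preserve the scale of the original data, but note that they do not add real statistical variety, so the result is suitable mainly for timing/scaling benchmarks rather than for improving model quality.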
3 How can the data sets used in these applications be extracted?
4 Deploy the DSVM (Data Science Virtual Machine) content in a Docker container
Is this possible, and if so, how?
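One caveat worth recording for item 4: the DSVM is distributed as a VM image, not a container image, so "deploying DSVM content in Docker" in practice means rebuilding a comparable toolset inside a container. A hedged sketch, with an illustrative (not official) base image and package list:

```dockerfile
# Sketch only: approximate part of the DSVM's Python data-science stack
# in a container. Base image and packages are illustrative choices.
FROM python:3.10-slim
RUN pip install --no-cache-dir numpy pandas scikit-learn jupyter
WORKDIR /work
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
```

Whether this counts as an answer to item 4 depends on which DSVM components are actually needed; GPU drivers and non-Python tooling would require additional, more involved setup.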