Digital Library

Deep Learning and Physical Models

  • Wiewel, S., Becher, M., Thuerey, N.: Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow, arXiv, submitted to NIPS 2018, 2018
    Abstract: Our work explores methods for the data-driven inference of temporal evolutions of physical functions with deep learning techniques. More specifically, we target fluid flow problems, and we propose a novel LSTM-based approach to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. Key for arriving at a feasible algorithm is a technique for dimensionality reduction based on convolutional neural networks, as well as a special architecture for temporal prediction. We demonstrate that dense 3D+time functions of physics system can be predicted with neural networks, and we arrive at a neural-network based simulation algorithm with significant practical speed-ups. We demonstrate the capabilities of our method with a series of complex liquid simulations, and with a set of single-phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitudes faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.
    [ DOI ] [ RESEARCHGATE ]
    • Takes advantage of sequence-to-sequence networks to learn the temporal evolution of physical systems, with a focus on fluid flow and smoke simulations
    • Uses convolutional neural networks (CNNs) for dimensionality reduction
    • Develops a hybrid Long Short-Term Memory (LSTM)-CNN architecture that is crucial for inferring high-dimensional outputs of physical simulations
    • Enables much faster simulation times with deep learning models than with traditional physical solvers
    • Works with Leaky Rectified Linear Unit (LeakyReLU) activations in the CNN layers
    • Uses the peak signal-to-noise ratio (PSNR) as well as a surface-based Hausdorff distance for evaluation
    • Evaluates accuracy first on the spatial encoding alone, i.e., the autoencoder network in conjunction with a numerical time-integration scheme
    • Trains autoencoders in two days on average (with pre-training) using the Adam optimizer (learning rate 0.001, L2 regularization 0.005); 6 epochs of pre-training and 25 epochs of training; dataset split: 80% training, 10% validation, 10% testing
    • Shows error measurements averaged over 10 simulations from the test data set
    • Finds that, for the spatial encoding, pressure fields are reconstructed quite well while velocity fields are reconstructed less accurately
    • Evaluates, secondly, the reconstruction when the temporal prediction networks are included
    • Uses 700 units for the first LSTM layer and 1500 units for the second
    • Trains the prediction network with a dropout rate of 1.32 × 10^-2, a recurrent dropout of 0.385, a learning rate of 1.26 × 10^-4, and a decay factor of 3.34 × 10^-4; 50 epochs with RMSProp; 319,600 training samples per epoch; roughly 2 hours of training on average
    • Selects hyper-parameters with a broad search
    • Achieves good predictions of future states of the simulation variables with the prediction network
    • Compares a fully recurrent LSTM with a hybrid alternative; the hybrid architecture outperforms the fully recurrent version in accuracy while using 8.9M fewer weights
    • Shows that the network is able to reproduce the complex behaviour of the underlying physical simulation, such as wave formation in fluids
    • Provides insights that the deep learning network successfully learned an abstraction of the temporal evolution of flow in fluids and smoke
    • Shows that even for significantly different physics, such as smoke, the approach successfully predicts the evolution and motion of vortex structures
    • Underestimates pressure values and reduces small-scale motions for smoke (room for improvement)
    • Offers significant speed-ups over regular pressure solvers, e.g., a 155× speed-up compared to a parallelized state-of-the-art iterative solver
    • Indicates a high potential for very fast physics solvers with learned models like the presented LSTM version
    • Achieves, for fluid setups, generalization to unseen shapes (e.g., an anvil) that were not part of the training data
    • Outlines future work on improving the autoencoder network to raise the quality of the temporal predictions
    • Claims that deep neural network architectures can successfully predict the temporal evolution of dense physical functions
    • Concludes that learned latent spaces with LSTM-CNN hybrid architectures are suitable for fluids and smoke simulations
    • Yields, as a deep-learning-powered simulation algorithm, very significant increases in simulation performance compared to traditional models
    • Outlines further research directions: improving the accuracy of predictions over performance considerations, and using physics predictions as priors for inverse problems
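The PSNR evaluation metric and the 80/10/10 dataset split mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names, the synthetic pressure field, and the `max_val` normalization are my own assumptions.

```python
import numpy as np

def psnr(reference, prediction, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) between a reference field and a
    predicted field; max_val is the assumed maximum of the value range."""
    mse = np.mean((np.asarray(reference) - np.asarray(prediction)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical fields
    return 10.0 * np.log10(max_val ** 2 / mse)

def split_dataset(samples, ratios=(0.8, 0.1, 0.1)):
    """Split samples into training/validation/test subsets using the
    80/10/10 ratio reported in the paper."""
    n = len(samples)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

# Illustrative usage on a synthetic 3D pressure field:
field = np.zeros((16, 16, 16))
noisy = field + 0.1                    # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(field, noisy), 1))    # 20.0 dB for MSE = 0.01, max_val = 1
train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test)) # 80 10 10
```

A surface-based Hausdorff distance, the paper's second metric, would additionally require extracting the liquid surface from the simulation grid, which is omitted here.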

Personalized Medicine

  • Froehlich, H., Balling, R., Beerenwinkel, N., Kohlbacher, O., Kumar, S., Lengauer, T., Maathuis, M.H., Moreau, Y., Murphy, S., Przytycka, T., Rebhan, M., Roest, H., Schuppert, A., Schwab, M., Spang, R., Stekhoven, D., Sun, J., Weber, A., Ziemek, D., Zupan, B.: From hype to reality: data science enabling personalized medicine, BMC Medicine, 16:150, 2018
    Abstract-Background: Personalized, precision, P4, or stratified medicine is understood as a medical approach in which patients are stratified based on their disease subtype, risk, prognosis, or treatment response using specialized diagnostic tests. The key idea is to base medical decisions on individual patient characteristics, including molecular and behavioral biomarkers, rather than on population averages. Personalized medicine is deeply connected to and dependent on data science, specifically machine learning (often named Artificial Intelligence in the mainstream media). While during recent years there has been a lot of enthusiasm about the potential of ‘big data’ and machine learning-based solutions, there exist only few examples that impact current clinical practice. The lack of impact on clinical practice can largely be attributed to insufficient performance of predictive models, difficulties to interpret complex model predictions, and lack of validation via prospective clinical trials that demonstrate a clear benefit compared to the standard of care. In this paper, we review the potential of state-of-the-art data science approaches for personalized medicine, discuss open challenges, and highlight directions that may help to overcome them in the future.
    Abstract-Conclusions: There is a need for an interdisciplinary effort, including data scientists, physicians, patient advocates, regulatory agencies, and health insurance organizations. Partially unrealistic expectations and concerns about data science-based solutions need to be better managed. In parallel, computational methods must advance more to provide direct benefit to clinical practice.
    [ DOI ] [ RESEARCHGATE ]
    • Lists many names for personalized medicine: personalized, precision, P4, or stratified medicine
    • Stratifies patients based on their disease subtype, risk, prognosis, or treatment response using specialized diagnostic tests
    • Bases medical decisions on individual patient characteristics, including molecular and behavioral biomarkers, rather than on population averages
    • Highlights that only a few examples exist that impact current clinical practice
    • Gives reasons such as insufficient performance of predictive models, difficulties to interpret complex model predictions, and lack of validation via prospective clinical trials that demonstrate a clear benefit compared to the standard of care
    • Reviews the potential of state-of-the-art data science approaches for personalized medicine, discusses open challenges, and highlights directions that may help to overcome them
    • Concludes the need for an interdisciplinary effort, including data scientists, physicians, patient advocates,
      regulatory agencies, and health insurance organizations
    • Uses the term biomarker for any measurable quantity or score that can be used as a basis to stratify patients (e.g., genomic alterations, molecular markers, disease severity scores, lifestyle characteristics, etc.)
    • Lists advantages of personalized medicine such as better medication effectiveness, since treatments are tailored to patient characteristics, e.g., genetic profile
    • Lists advantages of personalized medicine such as reduction of adverse event risks through avoidance of therapies showing no clear positive effect on the disease, while at the same time exhibiting (partially unavoidable) negative side effects
    • Lists advantages of personalized medicine such as lower healthcare costs as a consequence of optimized and effective use of therapies
    • Lists advantages of personalized medicine such as early disease diagnosis and prevention by using molecular and non-molecular biomarkers
    • Lists advantages of personalized medicine such as improved disease management with the help of wearable sensors and mobile health applications
    • Lists advantages of personalized medicine such as smarter design of clinical trials due to selection of likely responders at baseline