Abstract

In the machine learning process, hyperparameters are chosen to reduce the prediction error and improve convergence. However, optimized hyperparameters offer only limited gains in neural-network performance. In this work, the datasets used for the numerical experiments arise from the resolution of partial differential equations (PDEs) defined on a spatial domain. We propose a DYNAmic WEIghted Loss (DYNAWEIL) function-based approach for neural networks trained to learn these PDEs' solutions. This is a two-step process: first we train for a small number of epochs with a classical loss function, then the dynamic weighted loss function replaces the classical one, leveraging information from past training error histories. To validate this method, we carry out numerical experiments with different neural networks on datasets arising from two different physics: the Goldstein equation [1] and the radiative transfer equation [2]. To demonstrate the relevance of this approach, we compare neural network models using a classical loss function and the dynamic weighted loss function, each with and without hyperparameter optimization.
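The abstract does not give the DYNAWEIL weighting formula, so the following is only a minimal sketch of the two-step training loop it describes, under an assumed rule: each sample is weighted by an exponential moving average (EMA) of its past squared errors. The names warmup_epochs and ema_decay, the synthetic data, and the weighting rule itself are illustrative assumptions, not the paper's method.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data standing in for a PDE-solution dataset
# (inputs play the role of spatial coordinates on the domain).
X = torch.rand(512, 2)
y = torch.sin(3 * X[:, :1]) * torch.cos(3 * X[:, 1:])  # placeholder target

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_epochs, total_epochs = 50, 300  # assumed split between the two steps
ema_decay = 0.9                        # assumed smoothing of the error history
ema_err = torch.ones(len(X))           # per-sample training-error history

for epoch in range(total_epochs):
    opt.zero_grad()
    pred = model(X)
    per_sample = (pred - y).pow(2).squeeze(1)  # per-sample squared error

    # Update the error history (detached: no gradient flows through it).
    ema_err = ema_decay * ema_err + (1 - ema_decay) * per_sample.detach()

    if epoch < warmup_epochs:
        # Step 1: classical (unweighted) MSE loss.
        loss = per_sample.mean()
    else:
        # Step 2: dynamic weighted loss built from past error histories;
        # samples whose errors persist receive larger weights.
        w = ema_err / ema_err.mean()
        loss = (w * per_sample).mean()

    loss.backward()
    opt.step()

Normalizing the weights by their mean keeps the loss on the same scale as the classical MSE, so the switch at warmup_epochs does not abruptly change the effective learning rate.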


Document information

Published on 01/07/24
Accepted on 01/07/24
Submitted on 01/07/24

Volume Data Science, Machine Learning and Artificial Intelligence, 2024
DOI: 10.23967/wccm.2024.125
Licence: CC BY-NC-SA

