Reverse design of broadband sound absorption structure based on deep learning method

Generation of data sets

The dataset plays a crucial role in the training and performance of deep learning models11. Effective training of neural networks relies heavily on sufficient data, and the data volume directly impacts model performance. A large dataset enhances the model's generalization, leading to robustness and accuracy in real-world problem-solving. Data collection is therefore a primary factor influencing the efficacy of neural network training. However, acquiring a substantial amount of data through experiments or simulations can be challenging. To obtain a large-scale dataset quickly, we applied the analytical formula from the previous section to generate data. Initially, the critical parameter ranges of the composite sound-absorbing material were determined. After analysis and verification, the primary parameter ranges for the composite structure were established as follows: micropore diameters from 0.1 mm to 1 mm, thin-plate thicknesses from 0.1 mm to 1 mm, perforation rates from 0.01 to 0.1, and SAC thicknesses from 1 mm to 100 mm. To ensure robust generalization of the trained neural network model, a program was developed to sample randomly within the parameter space of each variable. In total, 60,000 data samples were generated, of which 80% were allocated to the training set and 20% to the test set30. This approach rapidly provides a dataset large enough for robust training and accurate evaluation of the neural network model.
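As a brief illustration, the sampling and splitting step might be sketched as follows. This is a minimal sketch, not the authors' code: the analytical formula of the previous section is assumed to be wrapped in a hypothetical `absorption_curve` function, and only four of the structural parameters are sampled here for brevity (the full composite structure has eight inputs).

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_samples = 60_000

# Randomly sample each structural parameter within its design range
# (only four of the eight inputs are shown for brevity).
params = np.column_stack([
    rng.uniform(0.1, 1.0, n_samples),    # micropore diameter (mm)
    rng.uniform(0.1, 1.0, n_samples),    # thin-plate thickness (mm)
    rng.uniform(0.01, 0.1, n_samples),   # perforation rate
    rng.uniform(1.0, 100.0, n_samples),  # SAC thickness (mm)
])

# absorption_curve() stands in for the analytical formula of the
# previous section; it should return 256 points over 0-5120 Hz.
freqs = np.linspace(0, 5120, 256)
# curves = np.array([absorption_curve(p, freqs) for p in params])

# 80/20 split into training and test sets.
split = int(0.8 * n_samples)
x_train, x_test = params[:split], params[split:]
```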

From the overall training approach of the neural network and the dataset established above, it is evident that the structural parameter design of composite sound-absorbing materials is essentially a multi-input, multi-output regression problem. With the dataset in place, the initial stage of neural network training has been successfully completed. Given such a large-scale dataset, a well-constructed neural network can fully exploit its fitting capability, facilitating effective design of structural parameters for composite sound-absorbing materials.

Data preprocessing

During the neural network training process, the features of the structural parameter data exhibit varying scales. Direct computation on raw data is susceptible to these scale differences, complicating model training and hindering convergence. Normalizing the data improves the efficiency and accuracy of training by mitigating the impact of scale differences on model learning and prediction, thereby enhancing data interpretability31. In the forward network, the absorption coefficients at different frequency points share a uniform range of (0, 1). However, the structural parameters span distinct value ranges. Therefore, for the inverse network tasked with predicting structural parameters, data preprocessing is essential. This study adopts the Min-Max normalization technique, which scales all raw input data to the range (0, 1), as shown in Eq. (10)32:

$$x=\frac{x_{0}-x_{\min}}{x_{\max}-x_{\min}}$$

(10)

This normalization approach ensures consistency in data handling and facilitates robust neural network training suitable for optimizing the design of composite sound-absorbing materials.
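For illustration, a minimal NumPy sketch of this Min-Max preprocessing, together with the inverse mapping needed to recover physical parameter values from normalized network outputs, might look like this (function names are illustrative):

```python
import numpy as np

def min_max_normalize(x: np.ndarray):
    """Scale each feature (column) of x to the (0, 1) range, per Eq. (10)."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min), x_min, x_max

def min_max_denormalize(x_norm, x_min, x_max):
    """Map normalized network outputs back to physical parameter values."""
    return x_norm * (x_max - x_min) + x_min
```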

Forward network model

First, a function was defined to construct the forward neural network model. This function takes as parameters the input dimension, the number of units in the hidden layers, and the output dimension. Within the function, an input layer is created first, followed by hidden layers defined as dense (fully connected) layers. Because the design of structural parameters for composite sound-absorbing materials involves nonlinear relationships, the rectified linear unit (ReLU) activation function is employed to enhance the network's capacity for nonlinear fitting33. An output layer with a linear activation function is then defined, and the Keras Model API in TensorFlow connects the input and output layers into a complete model. The mean squared error (MSE) serves as the loss function, quantifying the discrepancy between predicted and actual values; a lower loss indicates better predictive performance and robustness. The Adam optimizer dynamically adjusts gradients during training, ensuring efficient convergence of the neural network. Before training begins, the weights and biases of each layer are initialized randomly from a truncated normal distribution with a standard deviation of 0.1. To balance fitting accuracy and computational efficiency, the learning rate is set to 0.001. Training uses mini-batch gradient descent with batches of size 32, dividing the 60,000-sample dataset into smaller batches. With these parameters configured, the construction of the forward prediction network is complete. The subsequent steps involve loading the previously established dataset and training on it with the TensorFlow framework, using Python in the Jupyter Notebook environment.
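A minimal Keras sketch of the construction just described is given below; the function and argument names are illustrative assumptions, not the authors' code.

```python
import tensorflow as tf

def build_forward_model(input_dim, hidden_units, output_dim):
    """Fully connected forward network: structural parameters -> curve."""
    # Weights and biases drawn from a truncated normal distribution
    # with standard deviation 0.1, as described in the text.
    init = tf.keras.initializers.TruncatedNormal(stddev=0.1)
    inputs = tf.keras.Input(shape=(input_dim,))
    x = inputs
    for units in hidden_units:
        # ReLU hidden layers provide the nonlinear fitting capacity.
        x = tf.keras.layers.Dense(units, activation="relu",
                                  kernel_initializer=init,
                                  bias_initializer=init)(x)
    # Linear output layer for the regression targets.
    outputs = tf.keras.layers.Dense(output_dim, activation="linear",
                                    kernel_initializer=init,
                                    bias_initializer=init)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="mse")
    return model
```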

The forward prediction network maps structural parameters to absorption coefficient curves through a general fully connected neural network. To optimize fitting while minimizing computational complexity, the forward prediction network for composite structures comprises three hidden layers with 64, 128, and 512 neurons, followed by a 256-neuron output layer. The input layer has 8 neurons, as shown in Fig. 3b, c. Because the absorption coefficient curves serving as output labels are discretized into 256 points, the output layer has 256 neurons, each representing the absorption coefficient at one frequency point between 0 and 5120 Hz.
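Continuing the sketch above, the forward network can be instantiated with this architecture and trained with the stated batch size. Here `x_train`/`y_train` are the hypothetical normalized parameter and curve arrays from the dataset section, and the 200 iterations reported below are interpreted as training epochs.

```python
# Instantiate the forward network with the architecture described above:
# 8 inputs, hidden layers of 64, 128, and 512 neurons, and 256 outputs.
forward_model = build_forward_model(input_dim=8,
                                    hidden_units=[64, 128, 512],
                                    output_dim=256)

# Hypothetical training call; x_train/y_train hold the normalized
# parameters and the discretized 256-point absorption curves.
history = forward_model.fit(x_train, y_train,
                            batch_size=32, epochs=200,
                            validation_data=(x_test, y_test))
```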

Fig. 3

Deep neural network framework model. (a) Reverse network structure model. (b) Forward network structure model. (c) Network settings in the decoder. (d) Network settings in the encoder.

After 200 iterations, the forward prediction network achieved excellent convergence, with a final MSE below 0.0001. As shown in Fig. 4a, the loss curves on the training and test sets converge rapidly and smoothly, indicating a stable training process without significant fluctuations. The average errors on the training and test sets were 0.445% and 0.476%, respectively; such small errors imply minimal differences between predicted and actual values, confirming the accuracy of the deep learning model. Additionally, several randomly selected sets of absorption coefficient curves from the test set were subjected to an R-squared (R2) test, as depicted in Fig. 4b, yielding a score of 0.96.

\(R^{2}=1-\frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}\), where \(SS_{\mathrm{res}}\) is the residual sum of squares (the sum of the squared differences between the predicted values and the actual values) and \(SS_{\mathrm{tot}}\) is the total sum of squares (the sum of the squared differences between the actual values and their mean).
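For reference, this definition can be computed directly, for example with NumPy (an illustrative helper, not the authors' code):

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```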

Fig. 4

(a) The loss function on the training set and the test set in the forward network. (b) The R2 test between the predicted and analytical values in the forward network.

To validate the effectiveness of the forward prediction network, case studies were conducted to verify its accuracy. Parameters for composite sound-absorbing materials were randomly set within the parameter space, and this dataset was then input sequentially into the forward prediction network. The absorption coefficient curves predicted by the neural network were compared against those obtained from analytical solutions. The results are depicted in Fig. 5a–d.

Fig. 5

(a–d) The sound absorption coefficients predicted by the forward network for four groups of random parameters, compared with the analytical sound absorption coefficients. (e–h) The error between the predicted values and the analytical values.

The figure depicts four randomly selected cases predicted by the neural network under different parameters. The absorption coefficient curves predicted by the neural network exhibit characteristics similar to those obtained from analytical solutions in terms of the frequency, width, and magnitude of the absorption peaks, showing close alignment. Moreover, the error between the predicted and analytically derived absorption coefficients is within 5%, as shown in Fig. 5e–h. It can therefore be concluded that the trained forward network is capable of predicting absorption coefficient curves and could replace the process of calculating them from structural parameters. The decoder component of the deep learning model for the structural parameter design of MPP-SAC has thus been successfully developed.

Reverse network model

During reverse design, a common issue known as "non-uniqueness" arises, in which different sets of structural parameters exhibit similar characteristics34. This phenomenon often causes oscillations during neural network training and affects the convergence of the cost function, thereby compromising the fitting performance of the neural network. Additionally, the structural parameters obtained from reverse design typically contain errors, further contributing to suboptimal performance.

To address these issues, a deep learning model resembling an autoencoder was constructed for structural parameter design35. The neural network was divided into two parts: a reverse network and a forward network. Reverse design is both more crucial and more challenging than forward prediction. The architecture of the reverse design network is roughly the mirror image of the forward prediction network. MSE served as the loss function, and the Adam optimization algorithm was employed to dynamically adjust the model gradients. Training the reverse design network is challenging because it must be integrated with the forward prediction network, which can affect the training of the reverse network to some extent; the depth and width of the network were therefore expanded to ensure effective training. In the structural parameter design, the reverse design network comprises four hidden layers. To balance fitting accuracy and training speed, the number of neurons was set to 512 in the first hidden layer, 256 in the second, 128 in the third, and 64 in the fourth, with 256 neurons in the input layer, as depicted in Fig. 3a, d.
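A minimal sketch of this autoencoder-like (tandem) arrangement is shown below, assuming the pretrained `forward_model` from the previous section. Freezing the forward network during reverse training is a common tandem-design choice and is assumed here rather than stated in the text; the 8-parameter output dimension matches the forward network's input.

```python
import tensorflow as tf

init = tf.keras.initializers.TruncatedNormal(stddev=0.1)

# Reverse (encoder) network: a 256-point absorption curve in, 8 structural
# parameters out, with hidden layers of 512, 256, 128, and 64 neurons.
curve_in = tf.keras.Input(shape=(256,))
x = curve_in
for units in [512, 256, 128, 64]:
    x = tf.keras.layers.Dense(units, activation="relu",
                              kernel_initializer=init)(x)
params_out = tf.keras.layers.Dense(8, activation="linear",
                                   kernel_initializer=init)(x)
reverse_model = tf.keras.Model(curve_in, params_out)

# Cascade with the pretrained forward network (the decoder), frozen so
# that only the reverse network is updated during training.
forward_model.trainable = False
reconstructed = forward_model(params_out)
tandem = tf.keras.Model(curve_in, reconstructed)
tandem.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
               loss="mse")

# Hypothetical training: input and target are the same curves, as in an
# autoencoder.
# tandem.fit(curves_train, curves_train, batch_size=32, epochs=400)
```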

Fig. 6

(a) The loss function on the training and test sets in the reverse network. (b) The R2 test between the output and input values in the reverse network.

The loss curves on the training and test sets after 400 training iterations are depicted in Fig. 6a. Favorable convergence was achieved around 300 iterations, with a final MSE below 0.0001. The average errors on the training and test sets were 0.445% and 0.476%, respectively. To further validate the model's accuracy, several sets of random samples from the test set were used to evaluate the absorption coefficient curves with an R2 test, as shown in Fig. 6b; the final R2 value was 0.97.

To validate the ability of the trained autoencoder-like deep learning model to encode and reconstruct input absorption coefficient curves, several randomly selected curves from the test set were fed into the neural network, and the resulting output curves were compared with the inputs, as depicted in Fig. 7a–d. The figures show that, within the 0–5120 Hz frequency range, the input curves retain their original features after passing through the neural network; the outputs closely match the original curves, demonstrating a high level of consistency. The errors between the input and output absorption coefficients are all below 5%, as shown in Fig. 7e–h. It can therefore be concluded that the trained neural network has acquired the capability for inverse design, and the entire deep learning model is now complete.

Fig. 7

(a–d) The sound absorption coefficients output by the reverse network, compared with the input sound absorption coefficients. (e–h) The error between the output values and the input values in the reverse network.
