Description
The Laser-Induced Breakdown Spectroscopy (LIBS) technique is widely used to measure the concentration of elements in various types of samples [1, 2]. This study investigates and compares the performance of two approaches, classical regression using Partial Least Squares (PLS) and Deep Learning (DL), in predicting the concentrations of 24 elements from LIBS spectra. The main challenge in developing predictive models is the variation in the electron density and temperature of the plasma, which can completely alter the spectra. Therefore, besides PLS, we implemented more advanced tools such as convolutional neural networks (CNNs).
The study used a training set of 20,000 simulated LIBS spectra and a test set of 5,000 simulated LIBS spectra.
To develop the models, a pre-processing step normalized the data to the (0, 1) range. To make the study more comprehensive, the models were also trained and tested on the original (un-normalized) data. For DL, a simple Convolutional Neural Network (CNN) with six convolutional layers was designed. The models were evaluated on their stability and accuracy in predicting the concentrations of the 24 elements within the test set.
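One common reading of "normalize to the (0, 1) range" is per-spectrum min-max scaling; the sketch below shows that interpretation in NumPy. Whether the study scales each spectrum individually or uses global minima and maxima is an assumption here, as the text does not specify:

```python
import numpy as np

def minmax_normalize(spectra):
    """Scale each spectrum to the (0, 1) range via min-max normalization.

    `spectra` has shape (n_samples, n_channels). Per-spectrum scaling is
    an assumed interpretation; a global variant would use spectra.min()
    and spectra.max() over the whole array instead.
    """
    mins = spectra.min(axis=1, keepdims=True)
    maxs = spectra.max(axis=1, keepdims=True)
    return (spectra - mins) / (maxs - mins)

# Tiny worked example: each row ends up spanning 0 to 1.
x = np.array([[2.0, 4.0, 6.0],
              [1.0, 1.5, 2.0]])
print(minmax_normalize(x))   # [[0. 0.5 1.], [0. 0.5 1.]]
```

Such scaling removes shot-to-shot intensity differences between spectra, which is one reason the study also checks the un-normalized case as a control.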
Our findings suggest that DL outperforms classical regression in predicting the concentrations of the elements present in the simulated test LIBS spectra: the DL model showed greater stability and higher accuracy. Overall, this study provides important insight into the application of DL to LIBS analysis as a powerful and stable tool for accurate and reliable elemental analysis.