A look at Deep Learning for predicting strain and displacement fields in material grains using computer vision and Digital Image Correlation
Materials manufacturing processes involve complex material flow and boundary conditions. Furthermore, the quality of the process is critical to component design and to the resulting functional performance of the components. Accounting for the displacement and strain fields experienced by material grains is therefore of utmost importance to ensuring high-quality materials processing.
Digital Image Correlation (DIC) is an established computer vision technique with industrial applications in metrology, semiconductor inspection, materials tensile testing, and more. It has been widely applied in experimental solid mechanics to accurately measure two-dimensional (2D) and three-dimensional (3D) displacement and strain fields in various material systems, including engineering metals and polymers.
The work discussed in this project is a continuation of the efforts of Professor Ping Guo and Ru Yang at Northwestern University and can be read about here [1]: https://doi.org/10.1016/j.jmatprotec.2021.117474 .
Technical Overview
Current Industry-Standard DIC Method

A limitation of the industry standard is that it requires data from physical material tensile tests, which makes it difficult to scale toward a general solution. However, mechanical and materials engineering and finite element analysis are well-established sciences, and their simulations can serve as a robust ground truth to inform a deep learning approach to the same problem.
Proposed Deep Learning with DIC

Extension of Synthetic Dataset Generation Capabilities
Existing training and testing datasets from tensile-testing experiments quickly become the bottleneck when scaling development of a deep-learning method for displacement and strain field prediction. This project extends the synthetic image dataset generation algorithm outlined in the DeepDIC paper with randomly rotated speckles, user-specified speckle density, randomized generation and propagation of cracks simulating fatigue and failure in the material, and severe deformation of the speckle pattern. All of these added features seek to improve the robustness of the DeepDIC models in identifying a material's dynamic condition throughout a tensile test, from start to finish.
Generation of baseline speckle patterned image:
The speckle generation replicates the methods described in the paper [1] and reproduced below. Each image sample starts as an empty 512×512 image array that is filled with randomly generated elliptical geometry based on pre-selected parameters satisfying the variables in the standard equation for an ellipse:
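The referenced ellipse equation appears to have been lost in extraction. A standard rotated-ellipse form consistent with the parameter definitions that follow (h, k for the center, a, b for the semi-axes, θ for the orientation) would be:

```latex
\frac{\bigl((x-h)\cos\theta + (y-k)\sin\theta\bigr)^2}{a^2}
+ \frac{\bigl(-(x-h)\sin\theta + (y-k)\cos\theta\bigr)^2}{b^2} = 1
```

This is a reconstruction from the surrounding description, not necessarily the verbatim form in [1]; the paper may state the unrotated equation and apply the rotation θ separately.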
Here x and y are indices in the image array, while h, k, a, and b are drawn from uniform random distributions. Further, the angle of ellipse orientation θ, measured between the major axis and the global coordinate system, is also drawn from a uniform random distribution.
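The generation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the speckle count and axis ranges (`n_speckles`, `a_range`, `b_range`) are assumed parameters chosen for demonstration.

```python
import numpy as np

def generate_speckle_image(size=512, n_speckles=500, a_range=(2, 6),
                           b_range=(2, 6), rng=None):
    """Fill an empty array with randomly rotated elliptical speckles."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.zeros((size, size), dtype=np.uint8)   # empty (black) canvas
    ys, xs = np.mgrid[0:size, 0:size]              # pixel index grids

    for _ in range(n_speckles):
        h, k = rng.uniform(0, size, 2)             # ellipse center
        a = rng.uniform(*a_range)                  # semi-major axis
        b = rng.uniform(*b_range)                  # semi-minor axis
        theta = rng.uniform(0, np.pi)              # orientation angle

        # rotate pixel coordinates into the ellipse's local frame
        xr = (xs - h) * np.cos(theta) + (ys - k) * np.sin(theta)
        yr = -(xs - h) * np.sin(theta) + (ys - k) * np.cos(theta)

        # mark pixels inside the ellipse as a white speckle
        img[(xr / a) ** 2 + (yr / b) ** 2 <= 1.0] = 255

    return img
```

Passing a seeded `rng` (e.g. `np.random.default_rng(0)`) makes the pattern reproducible, which is useful when pairing reference and deformed images.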


Forming and propagating a crack into the simulated material grain:
Similar to the randomized ellipse generation, we model the "cracking" of a material as a triangular void forming at the edge of the image and propagating toward its center. To do this, we generate the triangular geometry as two lines with slopes m1 and m2 originating at the edge of the image array and intersecting at a point a distance L into the image. Any image pixels contained within the intersecting boundaries are set to black, representing a void. The triangular crack voids are generated before image deformation is applied.
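The crack construction above can be sketched as below. This is an illustrative assumption-laden version (the crack enters from the left edge, and the mouth-width and depth ranges are invented for the example), not the project's exact code.

```python
import numpy as np

def add_triangular_crack(img, rng=None):
    """Cut a triangular void from the left edge toward the image center."""
    rng = np.random.default_rng() if rng is None else rng
    size = img.shape[0]
    y1, y2 = np.sort(rng.uniform(0, size, 2))      # crack mouth on the edge
    y_tip = 0.5 * (y1 + y2)                        # crack tip height
    L = rng.uniform(0.2 * size, 0.5 * size)        # penetration depth

    ys, xs = np.mgrid[0:size, 0:size]
    t = np.clip(xs / L, 0.0, 1.0)                  # 0 at the edge, 1 at the tip
    upper = y1 + t * (y_tip - y1)                  # boundary line with slope m1
    lower = y2 + t * (y_tip - y2)                  # boundary line with slope m2

    # pixels between the two lines, up to depth L, become black void
    inside = (xs <= L) & (ys >= upper) & (ys <= lower)
    out = img.copy()
    out[inside] = 0
    return out
```

The two boundary lines share the tip point (L, y_tip), so the void closes to a point as it propagates inward, matching the description of the crack tapering toward the image center.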



Warping of speckle image to simulate stress and strain in the material grain:
A 2D displacement field is generated using equations (1) and (2) below to simulate material deformation in translation, rotation, stretch/compression, and shear, along with localized deformation formulated with 2D Gaussian functions [1]. The coefficients in these equations are drawn from uniform random distributions.
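Equations (1) and (2) themselves did not survive extraction. A plausible general form matching the description (a linear part capturing translation, rotation, stretch/compression, and shear, plus a sum of 2D Gaussians for localized deformation) is sketched here; see [1] for the exact formulation:

```latex
u(x, y) = a_0 + a_1 x + a_2 y
  + \sum_i A_i \exp\!\left(-\frac{(x - x_i)^2 + (y - y_i)^2}{2\sigma_i^2}\right) \quad (1)

v(x, y) = b_0 + b_1 x + b_2 y
  + \sum_j B_j \exp\!\left(-\frac{(x - x_j)^2 + (y - y_j)^2}{2\sigma_j^2}\right) \quad (2)
```

Here u and v are the horizontal and vertical displacement components, and all coefficients (a, b, A, B, centers, and widths σ) are the uniformly random quantities mentioned above.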
The resulting warpage of the speckle image is shown below; the final image outputs are saved as 256×256 PNG files.
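The warping step can be sketched as follows. This is a hedged illustration, not the project's implementation: the coefficient ranges and number of Gaussian bumps are invented for the example, and a simple backward mapping (sampling the source at the displaced coordinates) is used for brevity.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_speckle_image(img, rng=None, n_gaussians=3):
    """Warp an image with an affine + Gaussian 2D displacement field."""
    rng = np.random.default_rng() if rng is None else rng
    size = img.shape[0]
    ys, xs = np.mgrid[0:size, 0:size].astype(float)

    # affine part: translation plus linear terms (rotation/stretch/shear)
    a = rng.uniform(-0.02, 0.02, 3)
    b = rng.uniform(-0.02, 0.02, 3)
    u = a[0] * size + a[1] * xs + a[2] * ys
    v = b[0] * size + b[1] * xs + b[2] * ys

    # localized deformation modeled as 2D Gaussian bumps
    for _ in range(n_gaussians):
        cx, cy = rng.uniform(0, size, 2)           # bump center
        sigma = rng.uniform(size / 16, size / 4)   # bump width
        amp_u, amp_v = rng.uniform(-3, 3, 2)       # bump amplitude (pixels)
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        u += amp_u * g
        v += amp_v * g

    # backward warp: sample the source image at the displaced coordinates
    warped = map_coordinates(img.astype(float), [ys + v, xs + u],
                             order=1, mode='nearest')
    return warped.astype(img.dtype), u, v
```

Returning the displacement fields u and v alongside the warped image is convenient here, since the (image pair, displacement field) tuples are exactly what a DIC-style model would train on.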

Results and Discussion

