You are certainly correct that deep learning, or machine learning in any of its current forms, is not creative. But that is an important point. It is a research tool that detects patterns and can identify the best means of achieving an objective from very large data samples. This is new, and as a result it confuses people who want to understand, i.e. audit, the process that leads to the conclusions reached.
That is because we have developed a scientific process that starts with an assumption (a hypothesis), determines what existing laws or theories suggest the result should be, and then tests empirically for those answers. ML does not work that way.
The shortcut is the ability of ML systems to hold vast arrays of data points and test them for the desired patterns, usually framed as A versus B to achieve C. No metaphorical laws are used to model a logical process. In fact, no hypothesis is generated, and there is no logical process beyond the direct comparison of all available data points. Unless you can replicate the simultaneous comparison of several million data points, you cannot replicate the "logic" of the process.
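To make that concrete, here is a minimal sketch (pure Python, synthetic data, not any real system) of the kind of pattern-fitting described above: the model compares labeled data points A versus B to achieve objective C, and what it "learns" is just a handful of numbers, with no hypothesis and no human-readable rule anywhere in the process.

```python
import math
import random

random.seed(0)

# Synthetic data: two numeric features; the hidden pattern is
# "label 1 when the features sum to more than 1.0".
points = [(random.random(), random.random()) for _ in range(1000)]
labeled = [(x, 1 if x[0] + x[1] > 1.0 else 0) for x in points]

# Logistic-regression-style fit by plain gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):
    for x, y in labeled:
        err = predict(x) - y       # gradient of the log-loss
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in labeled) / len(labeled)
# The entire "explanation" of any single decision is these three numbers:
print(w, b, accuracy)
```

The fitted weights recover the pattern well, but asking "why" the model classified one point a certain way gets you nothing but the arithmetic of those weights; at the scale of millions of data points and millions of weights, that arithmetic is exactly the unreplicable "logic" described above.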
This has been the problem with using ML for sentencing recommendations in courts: there is no summarizable sequence of reasons that adds up to the conclusion given.
This is why human-style summaries or metaphorical assumptions behind an ML conclusion are often impossible to recover. The process is raw power, modeling vast arrays of data, and we simply do not have the capability to think like that. Realizing this requires that we acknowledge our limits and then determine what it takes to confirm that ML-based conclusions are not invalid. That is a different process: we need to test for the things that matter to us, such as bias in the data selection, or human well-being versus the efficiency of the outcome.
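A minimal sketch of that different process: rather than replaying the model's "logic", we test its outputs for a property we care about. Here, a hypothetical demographic-parity check on synthetic recommendations (the group names, outcomes, and the 0.1 tolerance are illustrative assumptions, not a standard).

```python
from collections import defaultdict

# (group, recommended_detention) pairs — a synthetic audit sample,
# standing in for a model's recommendations across two groups.
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 1), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Rate of adverse recommendations per group, and the gap between them.
rates = {g: positives[g] / totals[g] for g in totals}
disparity = max(rates.values()) - min(rates.values())
flagged = disparity > 0.1   # illustrative tolerance, chosen by humans
print(rates, disparity, flagged)
```

Note what this audit does not do: it never asks the model to explain itself. Deciding which properties to test, and what disparity is tolerable, is exactly the human judgment the next sentence points to.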
That is a human and creative process.