Nearly a century ago Joseph Schumpeter coined the expression »creative destruction« to describe the way in which new innovations drove older enterprises out of the market. For some, it meant a kind of rejuvenating spa treatment and modernization of the economy, while for others it spelled job losses and insolvency. No wonder, then, that similarly heated disputes have recently erupted over artificial intelligence (AI) as well. After all, hasn’t it penetrated every sphere of society with impressive speed? And isn’t it expected to be the next great fundamental innovation of the twenty-first century?
Concern is rising, particularly in scientific, creative, and artistic circles, that human labor power might become redundant or even be replaced entirely. But at the same time the technology behind AI is the outcome of innovative scientific procedures. So how should the relationship between AI and science be conceived? Will the »parents« soon fall victim to their own »child«?
That question can be answered by offering a counterargument. The parents will be needed more than ever to watch over their child’s development and not let it get onto the »wrong track.« To put it differently, in the age of AI science is more important than ever, serving as a vital corrective by identifying potential dangers.
The concept of »intelligence« should be taken in a metaphorical sense
But first it is necessary to clarify the notion of AI that we intend to use here. To do so we will rely on the definition devised by Carl Friedrich Gethmann and others in 2022, according to which AI describes the ability of computer systems to take over tasks that normally require human intelligence. Furthermore, it is important to distinguish between strong and weak AI. Whereas the former is equivalent or even superior to human intelligence and can take independent action, weak AI is tailored to the purpose of a specific application. As we know it today, AI fits into the latter category. Moreover, in a recent advisory opinion the German Ethics Council warns that we should take the concept of »intelligence« in a metaphorical rather than literal sense, since understanding that is merely simulated does not amount to genuine intelligence. In other words, simulated comprehension remains dependent on other (human) mental capabilities.
Nevertheless, we should not let ourselves be deceived just because an AI is defined as weak. Even weak AI includes high-performing, innovative applications with formidable predictive power. For example, due to its capacity for data processing alone, AI can reach conclusions far more efficiently than trained medical professionals can. Across a wide variety of disciplines, researchers now are employing AI to facilitate their work.
Prospects and dangers
The application of AI is especially widespread in the fields of information technology and in many of the natural sciences. But even in the social sciences its use has been increasing considerably. For example, in climate research AI generates great quantities of data, making it possible to examine complex relationships among numerous constantly changing variables. Such AI-supported screening relieves researchers of an enormous work burden, which in turn enhances their ability to raise and investigate further questions. Also, in this context we should not fail to mention the cost argument. As path-breaking innovations become ever more elaborate and expensive, AI saves resources while speeding up the research process.
AI has other advantages as well. It can efficiently enable and mediate interdisciplinary cooperation among research personnel and can uphold scientific quality criteria such as the replicability of results. Whereas researchers tend to focus on new projects of their own rather than double-checking and trying to replicate pre-existing findings, those tasks can now be taken over by AI. A further benefit may be found in the expansion of the hermeneutic circle: I can only hope to understand and explain a problem if I can see it as such in the first place. By helping researchers to recognize problems and potential distortions, AI expands the scope of the hypotheses that researchers can generate.
One of the most glaring weaknesses of AI is the lack of reflection
Nevertheless, enthusiasm about new technological possibilities should not lead us to romanticize them lest we overlook their potential risks. AI offers unimagined possibilities for social innovation and can ease the burden of work in the scientific establishment. But it should not degenerate into a gateway for the dumbing-down of skills or for the incursion of unscientific and unethical results into scientific studies. Thus, to ensure that AI does not jeopardize science, we must subject its use to more rigorous standards. Good science demands a high degree of transparency and reflection. Yet that is one of the most glaring weaknesses of AI: It neither explains nor critically reflects upon what it does. So, if we can no longer mentally reconstruct the path that led to research results, we cannot maintain existing scientific standards.
The irreplaceability of science
Consequently, scientists themselves should act as a corrective for AI-generated results, question correlations in a critical spirit, and draw causal conclusions. They must hew to the ethical guidelines of their own profession so that, by taking advantage of new technological possibilities, they can attain improved scientific knowledge rather than passively accepting whatever outcomes the technology produces. It is precisely at this point that science proves that it is irreplaceable: explaining, questioning, and analyzing, upholding ethical principles, and recognizing discrimination and distortions.
In this case, it’s not a matter of pitting one side against the other: science or AI. The child needs its parents to become fully mature. Meanwhile, the parents can learn from their own child how to broaden horizons and lend a hand in shaping innovations. Thus, training data from the sciences helps AI learn, while scientists can use AI to process an incomparably greater mass of data, outsource unwelcome kinds of work, and drastically increase their efficiency. In sum, we are talking about using AI to upgrade science, while ensuring that science chaperones AI.
The expression »artificial intelligence« (AI) was coined back in 1955 by the American computer scientist John McCarthy. At the time, it was regarded as a subfield of information science, although the notion also turned up in science fiction literature. The term refers to enabling technology to act appropriately and with foresight, using capabilities such as perceiving and reacting to sense impressions; acquiring, processing, and storing information as knowledge; understanding and generating language; solving problems; and attaining goals on its own. One problem that has since arisen is that it is no longer clearly evident what has been generated by AI and what by human beings. Also, the extent to which it may pose a threat to the decision-making sovereignty of humanity as a whole is regarded increasingly as an open question. (ed.)
As AI grows more pervasive, calls for its political regulation are becoming louder. In this context, however, the scope for action in scientific policymaking appears limited, at least for now. Of course, scientific standards are not defined by politics. But this does not mean that science policy should abstain from responsibility.
One aspect of science policymaking involves research funding for AI-related matters. After all, AI has not outgrown the formulation of scientific questions. Support programs such as the German Research Foundation’s funding initiative »artificial intelligence« can make a contribution here, as well as set standards. Investments must be made in research designed to make AI applications more transparent and understandable. That is the only way to ensure that research results achieved with the aid of AI applications will rest upon a scientifically solid foundation.
»Scientific access to data must be improved.«
One of AI’s greatest potential services to research consists in the collection and analysis of great quantities of data. But in order for scientists to make better use of that potential, access to data must be improved. In the agreement that preceded the formation of Germany’s current governing coalition, the parties pledged to pass a research data law, along with some so-called research clauses, to achieve that purpose. The law would establish a legal right of access to data, provided that its use in research is in the common interest. However, parliamentary deliberations on that proposal still have not begun, even though the rapid development of AI-based research underscores the urgency of accelerated legal regulation so that Germany will not lag further behind in the international competition.
A national research data law could also bring relief on the issue of data protection. The conflict between scientific freedom on one hand and data protection on the other has been considerably intensified by the use in the sciences of AI applications and reams of data. One of our priorities must be to make sure that, as before, easier access to data and the combining of data sets does not violate data protection guidelines. Thus far, the German states, data protection commissioners, and even scientific institutions themselves all have contributed in their respective ways to ensuring that it does not. The research data law will bring into being a national framework that should prove more reliable and legally secure.
Yet in the final analysis there will always be a need for human beings who know how to handle AI responsibly. Here, educational institutions play a crucial role. The federal and state governments will be called upon jointly to design a scheme whereby young people in schools and universities will come to understand and know how to apply AI, and even be capable of developing it themselves in such a way that they too can benefit from it. That will involve not only the revamping of school and university curricula, but also, obviously, adequate technical equipment for the various educational institutions plus well-coordinated training courses and continuing education for faculty members in all departments.
Consequently, so that the risks to the scientific establishment remain constantly in view, the next generation of scientists must be made aware of both the possibilities and the dangers and limits of AI from the very beginning. Teaching will have to change, as will the format of examinations. No longer will the solution to a problem be the sole crucial evaluative criterion; the way in which it was obtained will matter just as much, and especially so. The transparent use of AI also will be important. AI does not deserve to be demonized by professors. Instead, it should be moderated and meaningfully integrated, because that is the only way in which the parents can protect themselves against someday being devoured by their children.