In a December article in the widely acclaimed scientific publication Nature, we are told about a new application of AI: decoding the language of the animals. Here, AI is used to determine how animal sounds correspond to semantic information. According to the article,
The hope of a growing number of biologists and computer scientists is that applying AI to animal sounds might reveal what these creatures are saying to each other.
This is a great example of how new technology embeds itself into culture. The core purpose of AI is to replace human beings, and that is where all the funding is coming from. Its primary method of advancing in our culture is through giving people small economic advantages, which results in pressure on others to use it, even if they don’t want to.
Through this arms race, we all lose, because AI will require greater and greater amounts of energy in total. There will be no end to its growth, which will demand limitless energy and limitless materials. An intrinsic facet of AI is thus unsustainability.
An emergent social phenomenon of AI, like many technologies, is that it presents itself as the solution to problems that seem worth solving. “We can understand what animals are saying,” and automatically such a statement is perceived as good because “all research is good” and “yay, science!”
Indeed, there is a general cultural belief that understanding nature is good and that somehow such understanding brings us closer to nature. It does not. Instead, much of modern research has been subsumed into the technological machine and serves to distract rather than help. The mechanized scientific approach to the natural world is a dissection of it for our own amusement and further supports the idea of exploitation rather than exploration. It is no wonder that in Portuguese, the verb explorar can mean both to exploit and explore.
There is nothing intrinsically wrong with wanting to understand something using the scientific method. What is wrong is that today, the scientific method has become mechanized and thus exploitative. This use of AI is in fact one of the best examples: science uses AI and gives it legitimacy, even though the very nature of AI goes against the needs and the good of the biosphere, because it is so destructive.
I don’t blame scientists. They are, for the most part, completely lost in their own domain. In their world, technology is a tool to be used as humans see fit. They do not see how technology is embedded in society with a deterministic force that makes initially optional technologies mandatory, and that such resulting technologies are the prime cause of the ecological disaster.
Scientists operate as automatons, because technology has restricted their freedom to the narrow domain of exploration for the sake of exploration, an entirely harmless and even useful philosophy in vacuo but destructive in the larger complex of modern industrial civilization. No wonder Henryk Skolimowski wrote that,
Present academics, by and large, have no firm moral convictions. Living by the creed of rationality leads to the atrophy of courage in a deeper sense of the term.
Skolimowski here was referring to the narrowing of the mental domain of academics to pure rational thought, which prevents them from functioning in a greater moral sense. Because they can only function as discoverers of knowledge, and because their livelihood is tied to publishing knowledge through grant money that ultimately flows from economic utility, they have no recourse to any other mode of thought, including moral consideration.
Of course, scientists by and large follow the prevailing morality of modern industrial civilization. That’s why they have ethics boards, but such boards are limited to the larger morality of the system.
In the case of interpreting animal sounds, scientists have yet again adopted a potentially dangerous technology without question, which gives the technology validity in the minds of average people. They are practically clamoring to use this technology, and merely pay lip service to safety by participating in the impotent AI ethics discussions that are sure never to deviate from the interests of the machine.
The truth is, there is really no reason to decode the language of the animals. That is not to say that we should not study and observe animals, but when it comes to protecting the biosphere, there are plenty of better things that we can do, such as rewilding and just leaving nature the hell alone. By continuing along ever more esoteric lines of research to understand everything, scientists only provide intellectual stimulation to the curious and potential avenues of exploitation by capitalists and technologists. (If we can understand the calls of animals, we may also have the capability to more effectively find rare ones for poaching, for instance. Or, less dramatically, we can simply use this understanding to better evaluate the economic tradeoff between deforestation and preservation.)
That’s not to say that the scientific method can never help protect the biosphere. Of course, there is a small body of scientific research that could potentially be very helpful in that cause. But in this case, to adopt a dangerous technology like AI and give it credibility and a face of benevolence does a disservice to all of humanity and the biosphere.
Don’t get me wrong, I like science and math. That’s why I got my PhD in pure mathematics, after all. But science ultimately must face the fact that it has been co-opted by the technological machine. In other words, while it can operate in helpful ways in a narrow domain, its ultimate purpose is to provide theoretical material with which technology can grow, and to make the growth of the machine as smooth as possible so that it doesn’t trigger a revolt before it has a chance to make humanity either completely subservient to it, or completely extinct.