This term essay was written for UC Davis ECS188: Ethics in an Age of Technology. It explores the ethical concerns surrounding the rapid advancement of autonomous technology, including issues of liability, security, and privacy, as well as its impact on the job market. The essay traces the historical evolution of autonomous technology, from early automation in Mesopotamia to modern AI-driven systems, and highlights the ethical dilemmas posed by its misuse, such as weaponized drones, data privacy violations, and job displacement. It argues for the need to update regulations and policies to address these challenges while emphasizing the importance of education and adaptation to mitigate the societal impact of these technologies.
Introduction
As technology moves at a fast pace, many traditional products and tools have evolved through the addition of intelligent electronic components and connections to the internet. Many companies have implemented these new features to attract customers' attention. Labeling a product "autonomous" draws particular attention, as customers usually want such products to make their lives easier. While most people are happy about the benefits autonomous technology brings to daily life, ethical concerns about it date back to previous centuries. According to Langdon Winner in his book Autonomous Technology, "many of the poets, novelists, scientists, and philosophers between 19th to 20th centuries" had concerns about technology that is not guided by humans and is "self-directing", or beyond the limit of human control (Winner, 1977, p. 13). These comments were made before autonomous technology had been widely deployed across different fields. As of 2024, most industries involve autonomous technology in some form, and the news sometimes reports accidents caused by it. This paper discusses current and potential future ethical concerns around the use of autonomous technology. From my perspective, autonomous technology can be dangerous and harmful if people keep ignoring the ethical issues related to it. I believe its use should be constrained by policy, and I will analyze some of the current approaches to doing so.
Background
The concept of autonomous technology has not stayed constant throughout history. It can be traced back to the Mesopotamian civilization: in the journal article The Water Clock in Mesopotamia, David Brown provides a detailed description of how such a device works and treats it as evidence of the earliest autonomous technology (Brown, 1999). In that period, an autonomous technology could be described as a machine that performs its task without any human intervention. People still relied on manual labor to harvest crops, and autonomous technology developed slowly in the absence of modern science. During the first Industrial Revolution in the 18th century, humans for the first time found ways to use machines to replace manual operation for mass production. Because the automation of that era was driven by traditional mechanisms, the concept of autonomous technology became tied to iron and steam, and the building of railroads and trains created much more efficient transportation. Soon after the first Industrial Revolution, electrical energy was applied to machines on a massive scale, making them capable of doing far more. In recent decades, the revolution has moved to computer and AI autonomy, ranging from robotics, the Internet of Things, and vehicles to software AI such as ChatGPT. While these technologies are used throughout people's lives, discussion of their ethical problems is common in newspapers, journals, and research. Although the ethical issues differ between each revolution of autonomous technology, there are common problems rooted in the nature of autonomous machines. Discussing those problems is important, and solutions should be considered for them.
Problem Analysis
While autonomous machines can make far fewer errors than human operators, liability and security are major problems across autonomous technologies. Autonomous technology has already been used in crime. Robotic process automation, also known as the software robot, has been widely used for phishing on the internet; people see these bots everywhere on social media, in email, and in messaging software. While the majority of people can currently recognize such fraud by spotting the difference between sentences generated by AI and by humans, AI is growing rapidly, and it will become harder for humans to judge authenticity. Another example of the abuse of autonomous technology is the weaponized drone used in war. According to research by Kunertova, Russia uses "self-detonating Iranian Shahed-136 drones", which can fly as far as 2,000 km and have a 20% success rate of bypassing Ukraine's defense system (Kunertova, 2023, p. 582). Even though that rate does not seem high, the low cost of the drones and their advanced autonomous features encourage buyers to purchase more of them from the companies. This raises the question of who should bear liability when such technology harms society. Should the companies that made the technology be charged, or the people who use it? According to a discussion by Asaro, "Traditional approaches to handling liability are inadequate for dealing with autonomous artificial agents due to a combination of two factors–unpredictability, and causal agency without legal agency" (Asaro, 2016, p. 191). This unpredictability makes it hard to trust autonomous technology with critical tasks such as driving autonomous vehicles, and the spread of such technology can be harmful when an agent uses it in an unethical way.
With the development of advanced internet technologies, privacy has come to play a critical role in the ethics discussion. Because many autonomous technologies require user data as input over the internet, people are concerned about their privacy when using them. For example, IoT devices are autonomous technologies that automate tasks for many home electronic devices, from washing dishes to controlling smart light bulbs. However, these devices rely on large numbers of sensors so the software can decide what action to take next, and those sensors collect user data and push it to the cloud for processing. Even with end-to-end encryption of the data, the privacy issue still exists. An example provided by Allhoff and Henschke describes "such data from smart sex toy being used to enhance a user's experience, but it could just as easily be used to provide detailed demographic information that could go to marketing or product development" (Allhoff & Henschke, 2018, p. 59). Because the companies keep a copy of the user data, privacy ends up depending on the company's willingness not to hand that data to its marketing team. In Karale's research on the IoT, he surveys the regulations and policies that apply to IoT devices across the world: the EU, for example, established the General Data Protection Regulation along with rules in three directions, covering cybercrime, radio equipment handling, and cybersecurity (Karale, 2021, p. 12). Even though some countries have no specific rules for IoT or autonomous devices, those devices usually fall under general data protection rules. There may still be problems with this: because some data protections only restrict outside use of the data, the company itself may still abuse the user data it holds.
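To make this data flow concrete, the short Python sketch below shows a hypothetical smart-device reading being encrypted on the device and decrypted in the vendor's cloud. Everything here is an illustrative assumption rather than any real product's API: the names DEVICE_KEY, report_motion, and cloud_ingest, and the payload fields are invented for this example. The point it illustrates is the one made above: when the vendor provisions and retains the encryption key, encryption protects the data from outsiders in transit, yet the vendor still ends up with a plaintext copy.

```python
# Minimal sketch (hypothetical, not a real vendor API): symmetric encryption with
# the `cryptography` package's Fernet. The vendor provisions DEVICE_KEY at
# manufacture and keeps a copy in its cloud, so it can always recover the data.
import json
import time
from cryptography.fernet import Fernet

DEVICE_KEY = Fernet.generate_key()      # shared between the device and the vendor cloud

def report_motion(room: str, occupied: bool) -> bytes:
    """Package a sensor reading on the device and encrypt it for upload."""
    payload = json.dumps({
        "device_id": "bulb-1234",       # hypothetical identifier
        "room": room,
        "occupied": occupied,
        "timestamp": time.time(),
    }).encode()
    return Fernet(DEVICE_KEY).encrypt(payload)   # unreadable to an eavesdropper

def cloud_ingest(ciphertext: bytes) -> dict:
    """Vendor-side processing: the vendor's copy of the key recovers the reading."""
    return json.loads(Fernet(DEVICE_KEY).decrypt(ciphertext))

# The uploaded blob is protected in transit, but the vendor still sees exactly
# which rooms are occupied and when. The privacy question therefore shifts from
# "is the channel secure?" to "what may the vendor do with the data it holds?"
reading = cloud_ingest(report_motion("bedroom", occupied=True))
```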
Autonomous technology is, by definition, a counterpart to human labor: it aims to let humans do less work while being much more productive. In recent years, job transitions have been shaped by such technologies. As OpenAI's ChatGPT went viral on the internet, people were amazed at how naturally a machine can interact with humans and provide the information they request. A study of generative AI by Lazaroiu lists its advantages over humans, including how AI can use powerful computing resources to work more efficiently (Lazaroiu, 2023). All of these advantages serve as a warning to the human job market. One example of a diminishing job is the restaurant clerk: restaurants have started to use robots to serve dishes and take orders from customers. While some of these roles are still performed by humans, the progress of autonomous technology may eventually replace part of those jobs. Hendrickx introduces a rethinking of labor law to tackle the job displacement caused by autonomous technology, noting that classical labor law was shaped by globalization and economic competitiveness in the 1980s and 1990s (Hendrickx, 2018, p. 372). The relationship between humans and machines has shifted from tool to actual worker, and that change should be regulated by new policies that limit the behavior of new autonomous technology.
Discussion
Given the ethical concerns above, autonomous technology has been debated and evaluated for decades, and proposals for regulating it have taken a variety of directions. Because regulation and policy can be applied to all of the ethical issues discussed above, there are many specific discussions of each area of autonomous technology. For example, Isaac Asimov's "Three Laws of Robotics" set out three constraints on robots concerning possible harm to humans, obeying orders, and protecting themselves (Asimov, 1941). While this is among the earliest proposed rules for autonomous technology and robotics, it could not be put into practice, since robots of that time were very limited in capability. Now, in 2024, there are far more detailed regulations governing autonomous technology and robotics. However, updates to such policies still lag behind the advancement of the technology. Leenes and his team mention a case involving a surgical robot called the Da Vinci Si, which is treated as a normal surgical device even though it operates differently from regular surgical devices (Leenes et al., 2017). The manufacturer has been involved in several lawsuits because the robot requires training to operate, yet the company places no restriction on who is certified to operate it (Leenes et al., 2017). This case was caused by the lack of updated regulations for new technology, as there were no rules specific to surgical robots at the time. From this, I believe that updating regulation and policy in a timely manner is a good approach to reducing the ethical issues around autonomous technology.
Although the job-market problem could in principle be addressed by regulation and policy, it has recently become harder to solve. Regulations could guarantee that workers are not replaced by autonomous technologies, but because these technologies can be far more productive than humans, such guarantees may end up limiting manufacturing. Instead of protecting traditional jobs in this way, the transformation should be allowed to proceed in a constructive direction. That means colleges and universities should train people to handle the changes brought by autonomous technologies, especially how to learn on their own and take advantage of the benefits these technologies offer. As large language models have become popular autonomous technologies that can serve as capable assistants in every industry, assistant-type jobs are at high risk of being replaced. Regulation should not constrain such useful tools; instead, it should provide guidance on the transition between job types and on how to learn the skills that the next generation of jobs will require.
Conclusion
As the progress of autonomous technology exceeds everyone's expectations, analyzing its ethical concerns and finding solutions is crucial for humans. The discussion above of liability and security, privacy, and the impact on the job market, drawing on examples from other researchers' work, shows that autonomous technology will have a critical impact on the future. Regulation should be established to constrain the behavior of autonomous technology and to keep researchers and engineers developing new systems on the right track. While some regulations and policies have already been made to tackle this technological change, more specific regulations should be introduced and older ones amended to keep pace with the change in autonomous technology.
References
- Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought. MIT Press.
- Brown, D., Fermor, J., & Walker, C. (1999). The water clock in Mesopotamia. Archiv für Orientforschung, 130-148.
- Kunertova, D. (2023). Drones have boots: Learning from Russia’s war in Ukraine. Contemporary Security Policy, 44(4), 576-591.
- Asaro, P. M. (2016, March). The liability problem for autonomous artificial agents. In 2016 AAAI Spring Symposium Series.
- Allhoff, F., & Henschke, A. (2018). The internet of things: Foundational ethical issues. Internet of Things, 1, 55-66.
- Karale, A. (2021). The challenges of IoT addressing security, ethics, privacy, and laws. Internet of Things, 15, 100420.
- Lazaroiu, G., & Rogalska, E. (2023). How generative artificial intelligence technologies shape partial job displacement and labor productivity growth. Oeconomia Copernicana, 14(3), 703-706.
- Hendrickx, F. (2018). From digits to robots: The privacy-autonomy nexus in new labor law machinery. Comparative Labor Law & Policy Journal, 40, 365.
- Asimov, I. (1941). Three laws of robotics. In Runaround.
- Leenes, R., Palmerini, E., Koops, B. J., Bertolini, A., Salvini, P., & Lucivero, F. (2017). Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law, Innovation and Technology, 9(1), 1-44.
