By JOHN MAGLIOCCO JR.
With highly sophisticated algorithms and rapid advancements in storing, processing and analyzing data, artificial intelligence is making its presence known in the world today. With this, an ethical dilemma has surfaced that needs to be addressed immediately.
Who will be responsible for the ethical decisions made by artificially intelligent machines? Failing to address this dilemma could cause crime and death rates to skyrocket across the world.
The ethical dilemma surfaced most recently with Tesla’s Autopilot feature. An article by cryptocurrency news outlet CCN published in January reported a total of 50 Tesla-related fatalities in 2019 alone. While the causes of these fatalities continue to be investigated, ABC News was able to confirm three deaths were directly associated with Tesla’s Autopilot feature.
Meanwhile, the Victoria Transport Policy Institute states that many “predict that by 2030, autonomous vehicles will be sufficiently reliable, affordable and common to displace most human driving.”
With a large portion of the automotive industry developing autonomous vehicles, government officials are wasting indispensable time by not discussing who should be held responsible for potential occurrences of tragedy.
Boston Dynamics, a world leader in mobile robots, recently reported on its new Atlas model. The company’s website displays the shocking capabilities of the robot “to deliver high power to any of its 28 hydraulic joints for impressive feats of mobility.” It notes that “algorithms reason through complex dynamic interactions involving the whole body and environment.”
With AI-powered robots like that interacting with humans on a daily basis, it is imperative to address the underlying ethical issues before jobs or lives are lost.
Even with mature software such as iOS, large companies like Apple are continuously finding new bugs and releasing patches. According to VentureBeat, since January 2011, the frequency of iOS updates has increased by 51 percent. Whether it’s software for mobile devices or robots like Atlas, the persistent presence of software bugs will only continue to amplify technical uncertainty.
This is not to suggest that complete artificial intelligence is plausible: although AI can complete many human tasks, it still lacks free will and consciousness. Robots will never have the ability to act at their own discretion, nor will they ever understand why they’re completing the task at hand.
Moreover, AI machines presumably will never develop sapience or sentience, both of which exist in the human brain. Their range of functionality and capability resides entirely within the thousands of lines of code running on their hardware.
But unethical actors such as hackers, terrorists or even nation-states could recognize an opportunity to hide behind the disguise of AI-powered machines. That is why the ethical decisions made by artificially intelligent machines must be attributed directly to the ethical intentions of their creators, the computer scientists.
The solution to this emerging dilemma calls for strict intervention from government officials and United Nations leaders. Our world leaders must increase the monitoring and auditing of AI development.
One way to accomplish this is to require AI development teams, by law, to periodically submit official progress reports to the government. Detailed reports would allow officials to identify the assigned tasks and responsibilities of each computer scientist working on a project. Furthermore, start and finish times, end results, bugs, the testing environment and general comments should also be recorded.
World leaders must urgently take on this ethical dilemma of AI before our impressive discoveries become our deepest regrets.
John Magliocco Jr. is a student at Adelphi University, where he studies cyber-law and ethics.