CEO, VFT Solutions, Inc. Unrivaled Experience in Content Protection & Cybersecurity. Creator of Patented Social First, Anti-Piracy System.
“The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” -Stephen Hawking
Professor Stephen Hawking gave the cautionary warning above almost six years ago. While no human can predict the future with any specific accuracy, we can examine new technology through the lenses of social science and history to develop a general sense of its societal risks and rewards.
Before deciding whether to implement AI, we must use our imagination, experience and learning to enhance our understanding of how best to implement technology, predict intended and unintended consequences, and engage in our own risk-benefit analysis. This process is an essential precursor to introducing artificial intelligence into everyday life. It will also challenge programmers to think critically rather than accept at face value statements from alleged “experts” who endorse a particular technology.
Someone once said, “Any intelligent fool can make things bigger and more complex. … It takes a touch of genius — and a lot of courage to move in the opposite direction.” The line has been attributed to Albert Einstein, E.F. Schumacher and Woody Guthrie in various publications, but for our purposes, the source of the quote is less important than the message. Caution dictates that decision-makers must be bold and brave, speak truth based on their experience and education, and not shy away from being unpopular or different.
The question that will confront all of us in the very near future is what differentiates the human, who is organic and capable of subjective judgment, from the humanoid, which is neither organic nor capable of subjective thought, only of objective decision-making based on its programming.
Today there is no shortage of conflicting viewpoints on the law, ethics and morality of autonomous technology and AI. Though still in a nascent state, these technologies will undoubtedly become staples of everyday life. To ground our discussion in technology familiar to most, if not all, of us, we will examine the issues of AI and autonomous operation in social media.
In 2017, Liz Stillwaggon Swan wrote an article in IEEE Technology and Society Magazine asking whether addiction to social media among our youth is causing a rapid loss of writing proficiency:
“Social media platforms force users to think and write in bit-like form, with acronyms substituting for sentences and emoticons substituting for the expression of feelings. We are learning — some of us more quickly than others — to adapt to a computer-dictated form of communication. … We’re noting, in addition, what social media addiction is doing to written communication: specifically, it’s eroding the traditional divide between speaking and writing.”
According to a Stanford Social Innovation Review article, higher levels of the hormone oxytocin (the “cuddle chemical”) have the potential to be released in our brains when we interact with social media.
Noted internet and virtual reality pioneer Jaron Lanier wrote about these and many other concerns in his 2018 book, Ten Arguments for Deleting Your Social Media Accounts Right Now. In an April 2018 interview with the Intelligencer, he spelled out his concerns about the damage social media may be doing across society:
“One of the things that I’ve been concerned about is this illusion where you think that you’re in this super-democratic open thing, but actually it’s exactly the opposite; it’s actually creating a super concentration of wealth and power, and disempowering you. This has been particularly cruel politically.”
Dr. Yaniv Levyatan wrote:
“Our behavior in the social networks, which we perceived as something innocent and mundane, has become an instrument through which we can be influenced via manipulative techniques. The information we volunteer, such as Likes, make it possible for those who want to, to understand how to communicate with us in a precise way.”
Research shows that people touch their phones an average of 2,617 times per day, so if you wanted to get a message in front of a target’s eyes, the phone and social media apps are the way to do it. When combined with addictive behavior and a growing reliance on short-form messaging and content, I believe that critical thinking is under attack.
While our collective march toward becoming a generation of humanoids is concerning, this Terminator-like trajectory can certainly be addressed in a number of ways:
1. Stress critical thinking at all levels of education, from pre-school to graduate school. Sadly, many parents now use technology as a babysitter, educator and, in some cases, a proxy-parent when work, life and Covid-19 stress make traditional parenting challenging, if not impossible.
2. Slow the tech-education pipeline. “Free” technology such as Chromebooks, Macs or other devices used in everyday learning is not truly free. Full transparency is vital here, especially when cash-strapped schools are only too eager to accept these gifts.
3. Demand true transparency and government oversight of social media companies. They are simply not transparent about their “moderation” methods, their search-and-feed curation algorithms, or the true identities behind content creators, and their oligopolistic market position requires accountability. The science of addiction is too compelling to assume these companies are not engaged in, or tempted to engage in, nefarious activities designed with their bottom line, not the good of mankind, as the primary objective.