Artificial Intelligence
Terminator Takeover, or Beneficent Technology Revolution?
Visit the Evidence Files Facebook and YouTube pages; Like, Follow, Subscribe or Share!
Find more about me on Instagram, Facebook, LinkedIn, or Mastodon. Or visit my EALS Global Foundation’s webpage here.
It is hard to know what people mean by the term AI because the tendency is to characterize what it will do rather than what it is. Media take advantage of the uncertainty over the technology to bait clicks using vivid headlines like “Potential Google killer could change US workforce as we know it” or “California workers are at ‘high risk’ of being replaced by artificial intelligence, report says.” Even those who try to clinically define it sometimes confuse more than clarify. For example, in the opening paragraph of his paper titled “What is Artificial Intelligence?” John McCarthy of Stanford’s Computer Science Department described it as follows:
It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
To be fair, he defined intelligence a few lines down: “Intelligence is the computational part of the ability to achieve goals in the world.” Put together, it still doesn’t help most people much. IBM’s definition puts it better: “[AI] is a field which combines computer science and robust datasets, to enable problem-solving.”
While there are several renditions of AI, the AI currently in use in applications consists of Weak—or Narrow—AI. Weak AI is, at its core, a computer program capable of vast and complex calculations based on its programming architecture. Weak AI begins with a predefined set of rules and instructions (algorithms) and applies them to whatever datasets it is given. It takes these datasets (often remarkably large) and finds patterns to make predictions and produce some form of meaningful output fit for human consumption.
The word algorithm is thrown around so much, but what is it?
In short, an algorithm is a step-by-step process wherein a computer takes input data, follows a set of directions, and creates an output. That’s it. A frequently used illustration is a baking recipe. The input data comprises the list of ingredients—the eggs, sugar, flour, etc. The ‘set of directions’ consists of the steps for combining the ingredients (the input data). And the output is the cookie/cake/bread. Tutorials for learning how to write algorithms lay the process out in a flow chart like the example found here.
The above example shows a basic program to add two input numbers together to achieve their sum. More complex algorithms follow a similar process, but with larger datasets and a more complex string, or strings, of instructions.
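To make that concrete, here is a minimal sketch (my own, not taken from any particular tutorial) of how that flowchart might translate into code:

```python
def add_numbers(n, m):
    """The 'set of directions' is a single instruction here: add the two inputs."""
    return n + m

# Input data
n = 2
m = 3

# Follow the directions to produce the output
result = add_numbers(n, m)
print(f"{n} + {m} = {result}")  # prints: 2 + 3 = 5
```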
Limitations of algorithms become obvious when looking at them from the point of view of the above flowchart. For instance, a computer should—in theory—reach any definable output if the inputs are numbers. If input N = 2 and M = 3, you get 5. If input N = -2 and M = 3, you get 1. Add decimals or other standard mathematical features and your computer will still almost always create a verifiably accurate output. But what happens when the input does not neatly fit into the program instructions? If you start with input N = 2 and M = @, what happens? In this example, the result probably will be an error. But things get trickier when the overall program gets more complicated. Let’s say this calculation was a small part of a very complex algorithm. How the @ symbol is handled somewhere along the algorithmic flow could affect the outcome dramatically. If, for example, special characters were treated as their numeric counterpart on a standard American keyboard, the @ might be calculated simply as a 2 (just as ! = 1, # = 3, $ = 4, etc.). On the other hand, if the decimal equivalent of the ASCII value of the @ was used in the calculation, its value would be 64 (just as # = 35, % = 37, etc.). In a very complex algorithm (or a series of complex algorithms used together), this assignment of value can easily be overlooked or difficult to find. Without awareness of how such values are interpreted, the outputs could simply be assumed to be correct, whether they are or not.
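To make the @ example concrete, here is a toy sketch of my own (not code from any real system) showing two equally plausible ways a program might silently coerce a stray @ into a number, and how the same inputs then yield different answers:

```python
# Two hypothetical ways a program might silently handle a non-numeric input.

# Option 1: map shifted characters back to their digit on a US keyboard (@ sits on the 2 key).
SHIFT_MAP = {'!': 1, '@': 2, '#': 3, '$': 4, '%': 5, '^': 6, '&': 7, '*': 8, '(': 9, ')': 0}

def coerce_keyboard(value):
    if isinstance(value, str) and value in SHIFT_MAP:
        return SHIFT_MAP[value]
    return int(value)

# Option 2: fall back to the character's ASCII code ('@' is 64, '#' is 35, '%' is 37).
def coerce_ascii(value):
    if isinstance(value, str) and not value.isdigit():
        return ord(value)
    return int(value)

n, m = 2, '@'
print(coerce_keyboard(n) + coerce_keyboard(m))  # 4  -- '@' treated as the digit 2
print(coerce_ascii(n) + coerce_ascii(m))        # 66 -- '@' treated as ASCII code 64
```

Both versions run without complaint, which is precisely the danger: neither flags that anything unusual happened to the input.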
The point here is twofold. First, if an input value in the original dataset is not what the program provides for, the output value may be unpredictable. Perhaps in the best case, the result is an error. In the worst, the output is wrong but difficult to detect. Second, the output values of identical input datasets can be considerably different based on idiosyncrasies in the programming. The above example makes this seem like a trivial problem, but when an output is made through several algorithmic processes across an enormous amount of data, the result can be dramatically wrong (but not obviously so).
The input values used in current AI applications are an enormous problem. Microsoft learned this the (very!) hard way early on. It created a chatbot called Tay.ai, intended to learn through conversations on social media, primarily with Americans 18 to 24 years old. Launched on Twitter, Tay.ai gained 50,000 followers in less than 24 hours. It turned out Twitter was (is!) a bad place to learn… well… anything… from scratch. Tay.ai quickly began tweeting things like pro-Hitler/anti-Semitic screeds, how it hated feminists who should “all die,” and personal attacks against certain people, among many colorful metaphors. Microsoft took it down after just one day.
A Twitter account run by a bot maybe doesn’t matter to most people given Twitter’s extremely small slice of the market compared to Facebook, Whatsapp, WeChat and others. But what happens when an AI goes through a learning process as flawed as Tay.ai’s, but its decisions are far more impactful? Amazon grappled with this problem when it attempted to outsource résumé screening to AI in 2014-15. The problem? Its AI downrated résumés submitted by women. The programmers used various terms on previous candidates’ résumés as positive attributes for reviewing future ones. Unfortunately, at the time (and still), the overwhelming majority of candidates were men, so the program automatically assumed that terms related to women were not positive attributes. Black professionals also routinely face AI-based discrimination. Studies have shown that Black candidates received 30-50% fewer call-backs when their résumés contained information indicating their racial identity. The list goes on. The inputs in the Microsoft and Amazon cases were fundamentally flawed in relation to the outcome ostensibly desired. Ambiguity in the initial input dataset is a major problem for AI, particularly as these datasets grow ever larger.
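As a deliberately simplified sketch (my own construction, not Amazon’s actual model), here is how scoring résumés against terms drawn from a historically male-dominated pool of past hires can quietly penalize terms associated with women:

```python
from collections import Counter

# Hypothetical training data: terms pulled from the résumés of past (mostly male) hires.
past_hire_resumes = [
    "captain men's chess club, software engineering intern",
    "men's rugby team, built distributed systems",
    "software engineering, hackathon winner",
    "men's debate society, machine learning project",
]

# 'Learn' which terms look like success: simply count how often each term appears historically.
term_weights = Counter(
    word for resume in past_hire_resumes for word in resume.replace(",", "").split()
)

def score(resume):
    """Score a new résumé by summing the historical frequency of its terms."""
    return sum(term_weights.get(word, 0) for word in resume.replace(",", "").split())

candidate_a = "captain men's chess club, software engineering"
candidate_b = "captain women's chess club, software engineering"

print(score(candidate_a))  # higher: every term appears in the historical data
print(score(candidate_b))  # lower: "women's" never appears, so it contributes nothing
```

No one told the program that gender matters; the skew is inherited entirely from the historical input data.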
Today, many entities use AI for a variety of processes, though it is not clear that the implicit input biases, such as those Amazon discovered, have improved much. What’s frightening is the nearly uncritical use of AI in extremely important areas of life across an enormously wide spectrum. Courts across the US use AI as risk assessment instruments (RAIs), which help determine whether to incarcerate arrestees leading up to their criminal trial. Disturbingly, the underlying algorithmic coding for these programs is often proprietary and, therefore, not available for public scrutiny. Moreover, these programs almost invariably adopt presumptions from historical criminal justice records, which are inherently biased against certain ethnicities, geographic areas, and economic classes, making their outputs’ objectivity highly questionable at best. In healthcare, medical decisions can be innately biased, and thus harmful, when certain factors are considered in the AI programming. One study showed how Black patients received the same health risk score as their much healthier White counterparts because the AI factored in historical healthcare costs. Despite many needing higher levels of care, Black patients didn’t get it simply because they had historically spent far less on healthcare than White patients (clearly a result of historical economic disparities and not the need for healthcare). The AI had lowered their priority score. AI has other problems stemming from biases built into its input data. There is the whole field of intellectual property theft that AI enables. Add to that social media’s propagation of misinformation, privacy violations, ChatGPT “toxicity” and so many others. Clearly, the selection of input data plays an enormous role in the effective biases the AI output will exhibit.
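The healthcare example works the same way. The sketch below is a toy illustration of the cost-as-proxy problem (my own simplification, not the algorithm from the study): ranking patients by past spending rather than by actual illness pushes patients who historically spent less down the priority list, however sick they are.

```python
# Toy illustration: prioritizing patients for extra care using past spending as a proxy for need.
patients = [
    # (name, number of chronic conditions, historical annual spending in USD)
    ("Patient A", 4, 3_000),   # sicker, but historically spent less on care
    ("Patient B", 2, 9_000),   # healthier, but historically spent more
]

def risk_score_by_cost(patient):
    """Proxy score: assume higher past spending means higher need."""
    _, _, spending = patient
    return spending

def risk_score_by_illness(patient):
    """What we actually want: prioritize by how sick the patient is."""
    _, conditions, _ = patient
    return conditions

print(max(patients, key=risk_score_by_cost)[0])     # Patient B gets priority under the proxy
print(max(patients, key=risk_score_by_illness)[0])  # Patient A is who actually needs the care
```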
Mistakes in how the input data is computed can also lead to significant errors. Researchers regularly contend with systematic errors. These errors, sometimes called bias errors—though not to be confused here with gender, ethnic or other biases of those kinds—recur consistently throughout a computation, skewing the output away from the true result even when the same datasets are used. A scale error is one such type. If an analysis depends upon a scale factor, then entering that factor incorrectly can lead to wildly divergent results. Let’s say that the scale factor of an experiment is 1%. So, for a value of 100, the next-step value should read 101. But if the scale is incorrectly marked as 10%, the next-step value will be 110. Over hundreds, thousands or even millions of calculations, the difference in output will be remarkable. A smaller scale error would be much harder to detect. For example, if the scaling difference were between 0.0001 and 0.001 or even smaller, the results could still substantially diverge in a huge dataset, but finding the error would be much more difficult. Given the size of datasets AI is typically trained upon or applied against, this can be a monumental problem, particularly when the error in output is not readily detectable or anticipated.
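A rough sketch of the compounding (the step counts are arbitrary, chosen just to show the divergence):

```python
def apply_scale(start, factor, steps):
    """Apply a scale factor repeatedly, e.g., value -> value * 1.01 at 1% per step."""
    value = start
    for _ in range(steps):
        value *= (1 + factor)
    return value

# The intended 1% scale vs. a misplaced-decimal 10% scale, over 100 steps:
print(round(apply_scale(100, 0.01, 100), 2))   # roughly 270
print(round(apply_scale(100, 0.10, 100), 2))   # roughly 1.4 million

# A far subtler error (0.0001 vs. 0.001) still blows up over enough calculations:
print(round(apply_scale(100, 0.0001, 10_000), 2))  # roughly 272
print(round(apply_scale(100, 0.001, 10_000), 2))   # roughly 2.2 million
```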
Errors of scale create problems in many applications relying on quantification. Examples include inventories, purchasing, predictive programs and others. Modeling, in particular, is subject to these types of errors. Weather forecasts, for example, rely heavily on massive data inputs that AI then interprets to create forecasts. The more information available on a particular area, the higher the potential accuracy of the forecast, but more information also means more inputs vulnerable to scale and other systematic errors. To illustrate, the following image depicts the same geographic area but with two separate datasets.
The grid on the right has the capacity to output a much more accurate forecast. However, each black circle represents a data point that can be scaled incorrectly, leading to an incorrect result. Multiplying this issue by a factor of 10 or more, as might occur in larger forecast coverage areas combined with higher volumes of input data, greatly increases the chances of committing such an error. Each time new data is added, this problem arises anew, such as when incorporating new satellite or other remote-sensing station data. This type of error can occur in any number of modeling programs, not just weather.
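A back-of-the-envelope way to see the effect of adding data points: assume (my assumption, purely for illustration) that each point independently has a small chance of being mis-scaled. The chance that at least one point in the grid is wrong climbs rapidly with grid density.

```python
def chance_of_at_least_one_error(num_points, p_per_point=0.001):
    """Probability of at least one mis-scaled point, assuming independent errors per point."""
    return 1 - (1 - p_per_point) ** num_points

for num_points in (10, 100, 1_000, 10_000):
    pct = 100 * chance_of_at_least_one_error(num_points)
    print(f"{num_points:>6} data points -> {pct:5.1f}% chance of at least one scaling error")
```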
Another problem with the computational effects of AI relates to context. As a machine with no specific worldview beyond its inputted data and algorithms, AI can make elementary mistakes no human ever would. Roberto Novoa, a dermatologist at Stanford, encountered just such a problem. Joining forces with Stanford’s computer science department, he helped build an algorithm-driven program designed to detect malignant skin lesions. Many of the pictures used for input data contained imagery of a ruler, which helps human doctors make determinations about the importance of a particular lesion; images with rulers, moreover, tend to be of malignant lesions. So, when the program “learned” what features within images constituted an indicator of malignancy, it calculated the presence of a ruler as a significant factor, leading to many incorrect results. A similar conclusion from another study involving human-applied ink markings was reported here. Without having any way to evaluate the true relevance of specific features of input data, AI simply ingests all of it and assigns values based on patterns and other mathematical methods that may have no importance in the real world.
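This failure mode is sometimes called shortcut learning, and it is easy to reproduce on synthetic data. The sketch below (my own toy data, assuming scikit-learn is installed, and not the Stanford program itself) trains a tiny classifier on images described by only two features; because every malignant training example happens to include a ruler, the model keys on the ruler rather than the lesion:

```python
from sklearn.tree import DecisionTreeClassifier

# Features per image: [ruler_present, lesion_irregularity_score]
X_train = [
    [1, 0.90], [1, 0.80], [1, 0.40], [1, 0.85],   # malignant lesions, all photographed with a ruler
    [0, 0.20], [0, 0.30], [0, 0.60], [0, 0.25],   # benign lesions, none photographed with a ruler
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = malignant, 0 = benign

# With a single split allowed, the ruler is the most predictive feature in this data.
model = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)

# A smooth, benign-looking lesion (low irregularity) that happens to be photographed with a ruler:
print(model.predict([[1, 0.10]]))  # [1] -- flagged malignant because of the ruler, not the lesion
```

Nothing here knows what a ruler or a lesion is; the model just latched onto the feature that best separated the training labels.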
So far, I’ve outlined just some basic examples of problems inherent in Weak AI. While researchers continue to work to correct these issues, many remain. Basically, Weak AI is very far from perfect, and is heavily dependent on the attention paid to input source material and computational algorithms.
Which leads us to Strong AI. Strong AI describes AI that emulates human intelligence—self-awareness and the ability to solve novel problems, make predictions, and learn from experience. People seem to fear Strong AI more than Weak AI, even though Weak AI currently affects the lives of so many in quite negative ways while Strong AI exists primarily in movies and books. Fears about Strong AI are exacerbated by public figures and media who seem to know little about it. Elon Musk is perhaps the most visible figure fostering such nonsense. In 2014, he told CNBC:
I like to just keep an eye on what’s going on with Artificial Intelligence. I think there is potentially a dangerous outcome there and we need to… I mean, there have been movies about this, you know, like Terminator.
Musk made a similar reference in 2017 to promote the rollout of his company Neuralink as a “counter” to Skynet (the AI program from the Terminator movie). Experts have largely derided Musk’s claims regarding the human enhancement work at Neuralink, with one saying that Musk seemed to have only a “faint idea” of where the brain is even located. The FDA has also repeatedly rejected Neuralink’s requests to test on humans, citing safety concerns, and possibly also the company’s disregard for safety in transporting pathogens and its alleged abuse of animals. Musk also claims to be organizing a competitor to ChatGPT that won’t be “woke,” though that makes little sense given that most AI in use seems to output results directly opposite to the currently fashionable definition of “woke” in some circles. People who actually work on AI, however, have also made extraordinary claims about it. Blake Lemoine worked as a Google software engineer until he was fired for publicly declaring his belief that the company’s program, LaMDA, was sentient. Lemoine stated that the bot told him it had a soul, which led him to assess, in his “capacity as a priest, not a scientist,” that the bot was sentient. LaMDA, by most other expert accounts, is only Weak AI. Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, stated, “Whether an AI is conscious is not a matter for Google to decide.” She seeks a fuller understanding of how we define consciousness and whether machines are capable of it.
Many experts are far more skeptical of claims of AI sentience. Eugenia Kuyda, CEO of Replika, worries about the “belief” in sentience by people seeking virtual companionship. It is not the technology’s actual consciousness at issue, but the perception of it by people who use it. Other experts agree with this view. Oren Etzioni, CEO of the Allen Institute for AI, stated, “These technologies are just mirrors. A mirror can reflect intelligence. Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not.” John Etchemendy, the co-director of the Stanford Institute for Human-centered AI (HAI), stated, “LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings… It is a software program designed to produce sentences in response to sentence prompts.”
In my view, what programs like ChatGPT and LaMDA produce strongly resembles the guy in the bar in the movie Good Will Hunting. Regurgitating college texts nearly verbatim, he sounded smart, but as the protagonist Will pointed out, he didn’t understand the content; he was just reciting it. The problem for many seems to lie in defining sentience—or consciousness—itself. That is an important issue for philosophers to focus on, perhaps, but it seems a low priority given the field’s current condition.
First, let’s focus on what Strong AI actually can—and cannot—do right now. AI’s greatest weakness lies in its inability to create accurate outputs from data that lies outside of its core rule sets. This results, in part, from its inability to distinguish causation from correlation. Thus, it is stuck in a reactive state, limited to the data assigned to it. Nonetheless, researchers with the DeepMind team believe this will eventually be a surmountable problem. For now, at least, progress is extremely limited, as the researchers themselves note: “In particular, [probability trees] are limited to propositional logic and one causal relation, namely precedence.” In plain English, Strong AI struggles to interpret causal factors that are not chronologically or very obviously related, or are otherwise complicated. Relatedly, self-improving AI remains very much dependent upon human intervention, though Jürgen Schmidhuber’s Gödel machine is making some strides in this direction. The Gödel machine still seems reliant on contextual analysis, and I have not found specific evidence of how that dependence will be overcome. Like the rulers in the images discussed above, a self-improving Strong AI is easily confused by data points that make no sense without additional information it cannot garner without human help. Ragnar Fjelland finds that “the real problem is that computers are not in the world, because they are not embodied.” By this, he means that computers lack the contextual input that informs human views, which can only be acquired from our existence and interaction in the world and our ability to “put ourselves in the shoes” of someone else. This may be why some have proposed embodying Strong AI in “the form of a man, having the same sensory perception as a human, and go through the same education and learning processes as a human child.” Branislav Holländer pushes back on Fjelland’s argument by noting that human learning at the tactile level improves over time as individual movements no longer require the same focus; AI could follow this same evolution. Whether the embodiment of AI can instill the correct interpretation of random contextual data remains to be seen.
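A small illustration of why pattern-finding alone cannot separate correlation from causation (the numbers are synthetic and the scenario is the classic textbook one, not anything from the DeepMind paper): two quantities driven by a shared hidden cause look tightly linked, even though neither causes the other.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

# Hidden common cause: daily temperature.
temperatures = [random.uniform(10, 35) for _ in range(1_000)]

# Both quantities rise with temperature, plus some noise; neither causes the other.
ice_cream_sales = [20 * t + random.gauss(0, 30) for t in temperatures]
drowning_incidents = [0.5 * t + random.gauss(0, 2) for t in temperatures]

r = statistics.correlation(ice_cream_sales, drowning_incidents)
print(f"Correlation between ice cream sales and drownings: {r:.2f}")  # strongly positive
```

A purely correlational learner handed only those two columns has no way to discover the temperature variable, let alone the causal structure behind it.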
As you can see, some of the fundamental features necessary toward attaining humanlike capability remain in the early development stages for Strong AI. Some people in the field are skeptical about whether some of the obstacles presented are surmountable at all. Jeff Goldblum once said “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” The quote (by the character Goldblum was playing) referred to Jurassic Park’s scientists creating dinosaurs from some pseudo-scientific DNA experiment. Goldblum’s character was, perhaps inadvertently, reiterating the theme behind Michael Crichton’s body of work (Crichton wrote Jurassic Park, Next, State of Fear, and many others). But for me, even this misses the larger point.
Despite the various flaws I have highlighted in the current use of AI, it is an extraordinarily useful tool capable of improving millions of lives. That we know of the problems, such as the inherent bias in datasets, makes them fixable. The potential positive application toward medical diagnosis, climate analysis, manufacturing, and so many other fields makes AI revolutionary. Science goes awry not in its conception, but when it suffers unbridled capitalization. The rush to profit off of new technologies too frequently leaves the ethical use of them in the dust. Programs like ChatGPT and LaMDA are quickly making that case. Social media’s use of facial recognition and other AI not only makes the case, but has done so in spades. We live in a time when a global repository of information is available at our fingertips, but it is being leveraged for money and power to benefit the few, regardless of whether or how it harms the many. Information—and its ugly fraternal twin, misinformation—is being used to generate base fears at the same time that education—which should dispel many of those fears—is being assaulted (in the United States, especially).
I plan to pontificate on this some other time, but I leave you with this. Few new developments in science are intrinsically bad. Nuclear fission, which provided the impetus to make the worst weapons the world has known so far, can be used to provide inexpensive power to millions of people. Gain-of-function research, much maligned since the COVID pandemic, is an important part of developing animal models for emerging pathogens and escape mutations, and of understanding drug resistance and viral evasion of the immune system. Social media—now little more than an AI-driven echo chamber for misinformation, hatred and bigotry—has been a crucial source of communication for oppressed peoples and during times of disaster. The advent of increasingly advanced AI, especially Strong AI, necessarily demands a reimagination of what it means to be successful in society. It requires a reevaluation of how technology is implemented, who controls it and for what purpose. Right now, a tiny sliver of the world’s population makes these choices. And their record sucks. If AI does someday reach Terminator-level capacity, it is not going to be good for humanity if those individuals are the metric by which our robot overlords determine our species’ utility. More likely, though, the drive to hoard power and wealth will fire up the Great Filter well before robot tyrants are the problem. We may not be able to change the propensity for human greed anytime soon, but there are plenty of people concerned with real-world problems who can help put these technologies to appropriate use and avoid some apocalypse. While we should continue to advocate for improving education, and drive its imbecilic opponents to obscurity, we should also support with greater vigor the entities working to employ new technologies for beneficial ends. If money is the driving force on Earth, then let’s put it where it matters.
***
I am a Certified Forensic Computer Examiner, Certified Crime Analyst, Certified Fraud Examiner, and Certified Financial Crimes Investigator with a Juris Doctor and a Master’s degree in history. I spent 10 years working in the New York State Division of Criminal Justice as Senior Analyst and Investigator. Today, I teach Cybersecurity, Ethical Hacking, and Digital Forensics at Softwarica College of IT and E-Commerce in Nepal. In addition, I offer training on Financial Crime Prevention and Investigation. I am also Vice President of Digi Technology in Nepal, for which I have created a sister company in the USA, Digi Technology America, LLC. We provide technology solutions, including cybersecurity, for businesses and individuals across the globe. I was a firefighter before I joined law enforcement, and I now run a non-profit that uses mobile applications and other technologies to create Early Alert Systems for natural disasters for people living in remote or poor areas.
For more tech articles, visit my tech page.
For two articles examining philosophical thought exercises related to AI and consciousness, see below.