These days, to the average person, artificial intelligence is a term interlinked with black magic. But believe it or not, early "AI" was rule-based, simple enough that any person could follow its logic. How did AI evolve from barely being able to complete a game of checkers to being the backbone of the various language models (like Grok) that we use in our daily lives?
There are various definitions of artificial intelligence, each subscribing to one of its subsets. AI is generally an umbrella term, so depending on the context, it can refer to very different things. But in general, artificial intelligence is the ability to automate intellectual tasks that a human being can do. This definition distinguishes AI from machine learning, which is about training models to learn to classify data without hard-coded logic.
Symbolic AI is simply the automation of various intellectual tasks using hard-coded logic. It was pretty much all of AI before any machine learning actually took place. If you really think about it, symbolic AI is still present in a lot of places today: NPC pathfinding in various video games (however crude and limited it might be), bots for various online board games, grammar checkers, and so on.
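To make "hard-coded logic" concrete, here is a minimal sketch of the symbolic approach, in the spirit of the video-game NPCs mentioned above. The game, the rules, and the thresholds are all illustrative assumptions, not any real engine's behavior; the point is simply that every "decision" is an explicit, human-written rule:

```python
# A toy symbolic-AI decision rule for a game NPC: no learning, no data,
# just hand-written if/then logic. All names and thresholds are made up
# for illustration.

def choose_npc_action(distance_to_player: float, health: int) -> str:
    """Pick an NPC action from explicit, hard-coded rules."""
    if health < 20:
        return "flee"      # survival rule takes priority over everything
    if distance_to_player < 5:
        return "attack"    # close enough to engage
    if distance_to_player < 20:
        return "chase"     # player spotted, move toward them
    return "patrol"        # default behavior when nothing else applies

print(choose_npc_action(distance_to_player=3, health=80))  # attack
print(choose_npc_action(distance_to_player=3, health=10))  # flee
```

Nothing here "thinks", but from the outside, the NPC can look like it is making decisions, which is the whole trick of symbolic AI.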
This form of AI started in the 1950s, when a bunch of pioneers at various CS institutions philosophized about whether or not classical computing could actually be made to 'think'. This question is still debated today, independent of any particular computing architecture.
Symbolic AI started off as a small set of programs you could play checkers with. Back then, this was hyper limited, and the 'AI' was essentially just random choices being made from the other end.
This quickly snowballed into the 1960s, when a German-American computer scientist, Joseph Weizenbaum, developed a computer program known as ELIZA. The program acted as an automated mock therapist, an experiment in how computers could eventually "talk" to a person or simulate thinking on some level.
The program worked by taking canned responses and spitting them out while repeating certain keywords from your prompt back to you, all in an effort to create the illusion of "understanding". However, the program understood nothing; it simply spat out stock responses keyed to words in the user's input.
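The mechanism just described can be sketched in a few lines. To be clear, this is a toy imitation of the idea, not Weizenbaum's original DOCTOR script; the patterns and templates below are invented for illustration. Each rule matches a keyword phrase, captures the rest of the sentence, and reflects it back inside a canned template:

```python
import random
import re

# A toy ELIZA-style responder: match a keyword pattern, then echo part of
# the user's own words back inside a canned template. The rules here are
# simplified assumptions, not the original DOCTOR script.

RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would getting {0} really help you?"]),
    (r"i am (.*)",   ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
]

# Fallbacks when no keyword matches -- the "canned responses".
DEFAULTS = ["Please go on.", "I see. Can you elaborate?"]

def respond(user_input: str) -> str:
    """Produce a reply by pattern-matching the input against RULES."""
    text = user_input.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            # Reflect the captured fragment of the user's input back.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

Notice that the program never represents what a "vacation" is; the apparent empathy comes entirely from the reflection trick.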
Weizenbaum handed this program to various other people and had them 'talk' to ELIZA. The results were astounding and a little scary. He noted that people tended to attribute human-like emotions, or a sense of 'understanding', to the program. Even his secretary fell for the trick, despite it being hollow. It would often take users hours of conversation before they realized, to their utter sorrow, that the program had no understanding at all.
And while people were convinced the program was 'alive' in some way, this led to a number of catastrophic decisions, particularly from the users themselves. One person disclosed highly classified information to the program in hopes of finding some answer. Others became nearly emotionally dependent on it. Keep in mind that this was many years before the personal computer was really a thing.
If you've never used ELIZA and wonder what was so convincing about it, you should try an implementation of it here.
Even if you can see right through it and understand how and why the system is completely hollow, you can at least get some sense of why it might have tricked people. Keep in mind that this program was invented well before computers were mainstream, so anything 'computer related' was pretty much sorcery to the average person.
A fun fact about ELIZA: as you may have already guessed, it was one of the first programs capable of attempting the Turing test.
Despite how entirely empty these systems are, they can do a surprising amount when it comes to simulating intelligence convincingly enough to fool people.