Who Coined The Term Artificial Intelligence And When

What is AI?

In short, artificial intelligence (AI) refers to systems that can ‘think’ in ways that resemble how humans do. Modern technology has made it possible to build systems that observe patterns in data and automatically control other devices or computers using the information derived from those patterns.

For example, imagine two climate-control systems: one uses an occupancy sensor and rules generated by an algorithm to adjust the temperature, and the other does not. The system without a sensor simply holds whatever setting the occupant chose. The sensor-equipped system, by contrast, notices when a person enters the room and adjusts itself, turning on the air conditioning to keep the space comfortable, for instance.
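To make that concrete, here is a minimal Python sketch of what such a rule-based controller could look like; the function name, sensor inputs, and thresholds are hypothetical assumptions, not drawn from any real product.

```python
# Hypothetical rule-based climate controller. The sensor values, thresholds,
# and rules below are illustrative assumptions only.

def adjust_temperature(occupied: bool, room_temp_c: float, setpoint_c: float) -> float:
    """Return the target temperature based on simple occupancy rules."""
    if not occupied:
        # No one in the room: hold whatever the occupant last set.
        return setpoint_c
    if room_temp_c > setpoint_c + 1.0:
        # Someone is present and the room is too warm: cool it a bit more.
        return setpoint_c - 1.0
    if room_temp_c < setpoint_c - 1.0:
        # Someone is present and the room is too cold: warm it a bit more.
        return setpoint_c + 1.0
    return setpoint_c

# Example: an occupied room at 26 °C with a 22 °C setpoint gets extra cooling.
print(adjust_temperature(occupied=True, room_temp_c=26.0, setpoint_c=22.0))  # 21.0
```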

The same concept applies when I play a video game on my smartphone: the device relies on sensors that detect things like light and sound to respond to what I do.

To experience what AI feels like, just try playing a computer video game. There are hundreds of games to choose from that require little to no learning time, and plenty of free educational games as well.

Early pioneers of AI

Although no one person is responsible for the invention of artificial intelligence, several people contributed to its early evolution.

The term “artificial intelligence” was coined in 1955 by John McCarthy, in the proposal he wrote with Marvin Minsky, Nathaniel Rochester, and Claude Shannon for the Dartmouth Summer Research Project on Artificial Intelligence, the 1956 workshop widely regarded as the founding event of the field. In the decades since, computing power has grown roughly as Gordon Moore’s law predicted, giving researchers machines that are ever more powerful and inexpensive.

How does AI work?

In general, artificial intelligence (AI) works by using algorithms to study data and then using that information to make decisions or perform tasks.

There are several ways people can describe how AI functions. One way is to think of it as a set of rules coded in software. It uses these rules to analyze facts and come up with conclusions.

For example, if there’s data indicating that sunflower seeds are a great food source, the system will use that information to recommend that you eat more sunflower seeds.
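As a toy illustration of “rules coded in software,” here is a hedged Python sketch; the food scores and the threshold rule are invented purely for the example.

```python
# Toy rule-based recommender. The nutrition "scores" and the threshold are
# made up purely to illustrate how coded rules turn data into conclusions.

FOOD_SCORES = {"sunflower seeds": 0.9, "candy": 0.2, "oatmeal": 0.7}

def recommend(food: str, threshold: float = 0.6) -> str:
    score = FOOD_SCORES.get(food, 0.0)
    if score >= threshold:
        return f"Eat more {food}!"
    return f"Maybe skip the {food}."

print(recommend("sunflower seeds"))  # Eat more sunflower seeds!
print(recommend("candy"))            # Maybe skip the candy.
```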

A second approach would be to call it machine learning. This is where computers learn what they need to know via trial and error and then apply those lessons when new situations arise.

So, for example, in the earlier sunflower-seed case, the recommendation is something learned through past experience. By storing all of that knowledge in one place, even if it isn’t very much, the computer can refer back to it whenever it has to decide whether or not to recommend eating sunflower seeds.
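A minimal sketch of that trial-and-error idea, under the assumption that “experience” is just a stored list of feedback scores (the class name and ratings below are hypothetical):

```python
# Minimal trial-and-error learner: try things, store the feedback,
# and reuse the stored experience when a new decision comes up.

class FoodLearner:
    def __init__(self):
        self.memory = {}  # food -> list of feedback scores seen so far

    def record_trial(self, food: str, feedback: float) -> None:
        """Store the outcome of one 'trial' (e.g. a user rating from 0 to 1)."""
        self.memory.setdefault(food, []).append(feedback)

    def should_recommend(self, food: str, threshold: float = 0.6) -> bool:
        """Decide by looking up the stored experience, if any."""
        scores = self.memory.get(food)
        if not scores:
            return False  # no experience yet, so make no recommendation
        return sum(scores) / len(scores) >= threshold

learner = FoodLearner()
for rating in (0.8, 0.9, 0.7):  # three simulated taste tests
    learner.record_trial("sunflower seeds", rating)
print(learner.should_recommend("sunflower seeds"))  # True
```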

Yet another way to describe AI systems is to say they “think”. Humans think too, of course, by building our own mental models of the world, but human cognition is far more complex than either the rule-based or the learning-based description can capture.

The rise of artificial intelligence

Today, we are entering a new era in technology. Computers with extraordinary speed and ability are no longer science fiction fantasies but real things at our fingertips! The first general-purpose electronic computers, such as ENIAC, appeared in 1946, and it took decades for their power to increase significantly; the IBM PC did not arrive until 1981, and only in the 1990s did home computers become fast and affordable enough for everyday use.

Even if you don’t think of yourself as a computer user, you probably have several computers in your home: your phone, tablet, and laptop all carry some level of artificial intelligence (AI). Such is the pace of change that human beings struggle to keep up. New technologies develop so quickly that it becomes difficult to predict how they will affect us.

Predictions are risky. Not long ago, most people had never encountered a robot; now they are everywhere, in factories, offices, and homes. What other inventions might become commonplace in just a few years?

Hollywood’s love affair with AI

The use of artificial intelligence (AI) is rapidly expanding in science, media, commerce, healthcare, technology, and other industries. From computer programs that organize photo galleries to algorithms that predict loan approval rates, AI is moving beyond its tech industry origins and into everyday life.

Media hype has fueled an obsession with AI among investors and consumers. Some companies even consider AI their “next big thing.”

Yet decades after the term was coined, widespread adoption of AI remains limited. Experts offer several explanations for the delay. Most significant, perhaps, is the shortage of trained professionals who can implement intelligent software.

Natural language processing, machine learning, data mining, and other technologies are still under active development, and we are still figuring out where their limits lie.

Alongside technical limitations, cultural barriers prevent efficient implementation of AI. As with any new tool, processes must adapt to accommodate AI, from improving efficiency to changing workflows to adopting new standards and protocols.

Some skeptics say these changes could go too far and potentially disrupt the social fabric of communities and businesses. Others note that overreach was a common theme among those who first popularized the term “artificial intelligence.”

Baking science into products

In computer science, artificial intelligence (AI) is technology that enables computers to carry out tasks without direct human control.

Since AI programming requires a lot of work to make the computer understand what humans want it to do, programmers have invented ways to simplify this process by giving the computer “rules” for doing things.

These rules are called algorithms. Broadly speaking, there are two kinds of algorithms: optimization algorithms and decision-making algorithms. Optimization algorithms are used to find the best solution to a problem, such as finding the fastest way to get from one place to another.
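As one concrete instance of an optimization algorithm, here is a short sketch of Dijkstra’s shortest-path search on a toy road network; the cities and travel times are invented for illustration.

```python
import heapq

# Dijkstra's shortest-path algorithm on a toy road network.
# The places and travel times below are invented for illustration.

ROADS = {
    "A": {"B": 5, "C": 2},
    "C": {"B": 1, "D": 7},
    "B": {"D": 4},
    "D": {},
}

def fastest_route_time(start: str, goal: str) -> float:
    """Return the minimum total travel time from start to goal."""
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        time, node = heapq.heappop(queue)
        if node == goal:
            return time
        if time > best.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbor, cost in ROADS[node].items():
            new_time = time + cost
            if new_time < best.get(neighbor, float("inf")):
                best[neighbor] = new_time
                heapq.heappush(queue, (new_time, neighbor))
    return float("inf")

print(fastest_route_time("A", "D"))  # 7  (A -> C -> B -> D: 2 + 1 + 4)
```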

Decision-making algorithms are common tools in machine learning and data mining technologies, which help computers learn about their environment. For example, a robot might need to decide how to respond when someone opens its lid; it could choose to wiggle its arm or try to push away the object.

Machine learning and data mining also use decision-making algorithms to predict behavior or performance metrics, such as accuracy rates, based on input variables.
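A simple decision-making algorithm can be sketched as a one-rule “decision stump” learned from data; the sensor readings and labels below are made up for illustration.

```python
# A tiny decision-making algorithm: a one-rule "decision stump" learned
# from data. The proximity readings and labels are made-up example values.

def train_stump(samples):
    """samples: list of (feature_value, label) pairs, labels 0 or 1.
    Returns the threshold that misclassifies the fewest training samples."""
    best_threshold, best_errors = None, len(samples) + 1
    for threshold, _ in samples:
        errors = sum(
            (1 if value >= threshold else 0) != label
            for value, label in samples
        )
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

def predict(threshold, value):
    return 1 if value >= threshold else 0

# e.g. proximity readings -> should the robot react (1) or not (0)?
data = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
threshold = train_stump(data)
print(threshold, predict(threshold, 0.7))  # 0.6 1
```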

Autonomous cars

Fully autonomous vehicles are currently being tested by several companies, including Google, Amazon, Tesla, IBM, Intel, Nokia, Samsung, and Mercedes-Benz.

Autonomous vehicle technology is definitely useful, but it may be some time before we see fully autonomous vehicles in our lives. Deploying them at scale would require enormous changes to all of our existing infrastructure (roadways, parking lots, gas stations, truck stops, battery-charging facilities, you name it).

The cost of the technology is also still high relative to the price of an ordinary car. If the cost of the self-driving system is folded into the sticker price, the car is no longer competitive with its non-self-driving counterpart.

Another concern comes from the fact that there’s not much regulation or guidance regarding how such systems should work or what they should look like. What’s considered “best practice” when it comes to automated driving systems? Is it even reasonable to expect drivers to operate these vehicles without significant training? Or might regulations need to be adjusted to account for machines taking the wheel?

Self-driving cars

The rise of self-driving vehicles (also called autonomous or AI-driven vehicles) is one example of how artificial intelligence is becoming more prevalent in our everyday lives.

Companies such as Google, Tesla, and Uber are working hard to make self-driving cars a reality. They have invested heavily in R&D, built their own laboratories, and hired many scientists with proven track records in building effective self-driving software.

However, all of these vehicles still require a human driver to supervise them. That supervision might seem unnecessary in areas where driving conditions are very safe, but there are still situations where an error could prove fatal.

For instance, studies suggest that drivers tend to change speed when they perceive something as dangerous, and that misperception is a bigger factor in accidents than alcohol concentration.

This means that without a human driver, the vehicle could choose a speed that doesn’t match what the situation actually calls for, dropping below 30 mph when the posted limit is higher, for example.

Another reason self-driving cars aren’t widely accepted is that most people don’t know enough about them to decide whether they would feel safe riding in one.

Amazon’s Alexa

Soon after Alexa became available to consumers, people started asking whose voice was behind it. As technology progresses, so does the ability to create digital versions of things that already exist; in this case, human voices.

Creating AI-powered virtual assistants like Alexa requires enormous amounts of data in order to learn how humans speak. According to The New York Times, one of the companies responsible for creating these virtual assistants uses over 160 million hours of voice recordings from around 70 different speakers in its training database.

Seventy speakers might not seem like many, but it is enough to give each assistant a very specific speech pattern with no stray vocal tics.
