
Understanding AI and Cognitive Systems – a Perspective on Its Potential and Challenges While Putting Them to Work with People

Level-3: Act Autonomously

Systems at Level-3 process input data, use a model of what a person wants to achieve (their goals) and take action based on choices that maximize the chance of reaching those goals. The input can be textual, audio or video data. The goals can be to achieve something or to maintain a condition. The output is a prescription, which can be simple or complex depending on the uncertainties the system models. The outcome depends on how much autonomy the person has delegated to the system.
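
To make the description concrete, below is a minimal, hypothetical sketch of such a goal-directed loop in Python: the system interprets an observation, simulates each candidate action with a crude world model, and picks the action that best satisfies a maintenance goal (here, keeping a room at a target temperature). The names, the toy world model and the example values are illustrative assumptions, not part of any system discussed in this article.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    target_temp: float   # maintenance goal: keep the room at this temperature

def perceive(observation: dict) -> dict:
    """Interpret raw input (here, a single sensor reading) into a state estimate."""
    return {"temp": observation["sensor_temp"]}

def simulate(state: dict, action: str) -> dict:
    """Crude, illustrative world model: heating raises temperature, cooling lowers it."""
    delta = {"heat": +1.0, "cool": -1.0, "wait": 0.0}[action]
    return {"temp": state["temp"] + delta}

def goal_satisfaction(state: dict, goal: Goal) -> float:
    """Higher is better: negative distance from the maintenance goal."""
    return -abs(state["temp"] - goal.target_temp)

def choose_action(observation: dict, goal: Goal, actions=("heat", "cool", "wait")) -> str:
    """Pick the action whose simulated outcome best satisfies the goal."""
    state = perceive(observation)
    return max(actions, key=lambda a: goal_satisfaction(simulate(state, a), goal))

print(choose_action({"sensor_temp": 19.0}, Goal(target_temp=21.0)))  # -> "heat"
```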

Technologies used by such systems go beyond Level-2 and include planning and execution, open-world reasoning and robotics (i.e., hardware alone, software alone or both). They help a person conserve energy otherwise spent on mundane or unsafe tasks and focus on tasks that matter. Autonomic computing was a push in this direction for IT systems that could self-manage [8]. Level-3 systems are susceptible to spurious data, shifting human goals, changes in the environment and legal issues such as liability. One may notice a similarity to the factors under which a hired secretary may be considered unsatisfactory or, conversely, empathetic. For example, a Level-3 system may help a hard-working person by scheduling his meetings. But if the person gets stressed over time due to exertion and turns suicidal, who should be blamed? Did the secretary abet the incident, or fail to warn the person's family of impending harm, both of which may be punishable in some countries? Does this also make the software creator liable? A critical scientific challenge in Level-3 systems is to set the right level of autonomy [9] that delivers value while balancing risks.
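
One simple way to think about setting the level of autonomy is to gate autonomous action on the system's confidence and the estimated risk of acting. The sketch below illustrates this idea; the thresholds, the three-way split and the function name are illustrative assumptions, not a prescription drawn from [9].

```python
def decide_autonomy(confidence: float, risk: float,
                    act_threshold: float = 0.9, risk_budget: float = 0.1) -> str:
    """Decide how autonomously to behave for one proposed action.

    confidence: system's estimate that the action achieves the user's goal (0..1)
    risk: estimated cost of acting wrongly (0..1); thresholds are illustrative.
    """
    if confidence >= act_threshold and risk <= risk_budget:
        return "act"            # full Level-3 behavior: execute without asking
    if confidence >= 0.6:
        return "ask"            # propose the action and wait for human approval
    return "recommend_only"     # fall back to Level-2 style advice

print(decide_autonomy(confidence=0.95, risk=0.05))  # -> "act"
print(decide_autonomy(confidence=0.70, risk=0.30))  # -> "ask"
```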

3.1 Discussion

So, given the cognitive buzz, is all software cognitive? Any software takes inputs, processes them via algorithms and produces results. Cognitive software is software that can identify intelligent patterns in input data, preferably interacting with people in a natural way to help them make good decisions, and that has an ability to act independently. We will argue that any software that does not reach Level-1 is not cognitive. This includes much off-the-shelf software that has rigid behavior based on narrow inputs. A calculator should calculate a mathematical expression correctly, and that is perfectly fine. Cognitive software simply represents a new wave of systems that promise to deliver unprecedented benefits for a data-rich world if we can leverage the positives.

AI systems have been built across the three levels, but most of them are at Level-1. Game-playing AI systems like Watson and AlphaGo, which have beaten world-class champions, are at Level-3, and their underlying technologies can come in handy in tackling serious issues like health and security. In this context, our own work [24, 25] involves building multi-modal chatbots to explain and engage people in decisions related to water usage and exploring exoplanets (astronomy), respectively.

Advanced systems also come with a host of issues, such as how to judge their performance [10]. Further, like any other advanced technology (say nuclear energy, guns or gene therapy), the litmus test will be the goals towards which we are able to apply this technology: to help mankind grow peacefully in balance with nature, or to create disputes, economic distress, joblessness and harm. That will be the eventual mark of success for AI and cognitive systems.

In the next two sections, we will discuss two case studies of how AI is being applied to real-world challenges.

 

4.0 Case 1: The Self-Driving Car AI

“The genius of Einstein leads to Hiroshima.” – Pablo Picasso

Consider the scenario of a self-driving car. It is a vehicle that can navigate public thoroughfares autonomously, and hence without human intervention. Self-driving as a technology was already tested in space missions decades ago [16]. Let us explore how the technology may work among people on earth.

First, let us consider what problem automated cars solve and whether it is worth solving. Ever since cars were invented, people have learnt to drive them or have hired drivers to drive for them. As cars became more complex, the skill and focus needed to manage them have increased. In many cases, people are not able to drive (due to age or cognitive challenges), not willing to drive (due to inconvenience), or unable to afford a driver. But is removing a category of jobs, drivers here, the right problem for scientists and engineers to solve when there are so many other pressing problems? The author is of the view that any technology which removes a person from a job is bad, as it adversely affects families in the long run and creates social tensions. Economists have often argued that displaced drivers can be retrained for better jobs, but that argument has repeatedly proven hollow for other jobs. Instead of riding the slippery slope of the jobs discussion, couldn't door-to-door transportation be made cheaper and more convenient with novel resource-sharing ideas (e.g., vehicles and thoroughfares) that boost employment, or technologies that augment human cognition and make transportation safer?

Second, let us consider the speed at which the underlying technologies can be brought to people. Without claiming to be an expert who knows all aspects of an automated car, some subsystems are clear to an engineering eye: one has to detect the car's environment, control the vehicle, plan a route to the destination, engage the humans on board and know about their satisfaction with the drive. Each of these sub-systems would be immensely valuable if it were available to drivers today to assist them while driving. Unfortunately, with the exception of route planning, none of the other sub-systems are mainstream in today's cars. Can't they be accelerated to market, saving lives and ushering in more convenience?
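
To illustrate how such sub-systems could be packaged as driver assistance rather than full automation, here is a hypothetical sketch of their interfaces. The class and method names are illustrative only; they do not describe any particular manufacturer's architecture.

```python
from abc import ABC, abstractmethod

class Perception(ABC):
    @abstractmethod
    def detect_environment(self, sensor_frames) -> dict: ...   # nearby vehicles, lanes, pedestrians

class VehicleControl(ABC):
    @abstractmethod
    def suggest_maneuver(self, environment: dict) -> str: ...  # e.g., "brake", "keep lane"

class RoutePlanner(ABC):
    @abstractmethod
    def plan(self, origin, destination) -> list: ...           # ordered waypoints

class DriverEngagement(ABC):
    @abstractmethod
    def alert(self, message: str) -> None: ...                 # voice or dashboard prompt

class SatisfactionMonitor(ABC):
    @abstractmethod
    def record_feedback(self, rating: int) -> None: ...        # post-drive feedback

def assist_step(perception: Perception, control: VehicleControl,
                engagement: DriverEngagement, sensor_frames) -> None:
    """One assistance cycle: sense, suggest, and alert the human driver,
    who remains legally in control and can override any suggestion."""
    env = perception.detect_environment(sensor_frames)
    maneuver = control.suggest_maneuver(env)
    engagement.alert(f"Suggested maneuver: {maneuver}")
```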

Third, let us consider the risks of automated cars. Any human-created artifact has defects which may or may not get detected, and if detected, may or may not be rectified; cars are no exception. In 2012, approximately 7.2 million motor vehicles were sold to customers in the United States, while worldwide, car sales came to around 65 million units [17]. NHTSA reported that 51 million cars were recalled in 2015, adding that every year, on average, 25% of recalled vehicles are not repaired [18]. The numbers reveal that today's cars are not defect-free and that when defects are detected, one in four is not corrected. These defects stem not only from the complex supply chain needed for a modern car but also from intentional cheating and short-cuts adopted by car manufacturers, as revealed periodically. Since driverless cars are not inherently different transportation technologies but conventional cars with automated control modules, one may infer that future cars will continue today's trend of defects, worryingly including in their newly created control modules. This raises the specter of buggy automated cars playing havoc on streets shared with other automated and manual drivers, pedestrians and other occupants of the road. One can dispute the exact statistics, but it is clear that cars will not break decades of recall history and suddenly become defect-free without some drastic manufacturing advance.

Under the cloud of car defects, let us consider who will take responsibility for a car's actions. Suppose A buys a car from manufacturer M and B rides in it. (B can be A himself, or his wife, children, parents or friend, or a burglar who has just stolen the car.) The car hits C. Will C sue B, A or the manufacturer for damages? The problem may be the way B gave instructions, how A maintains his car, or how the manufacturer built the car. Today, the driver of a car is responsible for whatever happens with the car, unless the driver is a minor, in which case the legal parent or guardian is responsible, or the driver can prove the car was defective, in which case the manufacturer is responsible. The same accountability is needed for a victim of an automated car's mistakes. The conundrum would be resolved if the car's control module were only assisting the "driver", a person who today is anyway legally in control of the car for driving purposes and can override any assistance. The accountability issue is unresolved today and has the potential to cause grave social costs.

The scenario, however, is not exotic. In the recent case of Tesla's AutoPilot, where a person lost his life after delegating autonomy to the self-driving module, the system was found to be performing correctly, but its capabilities (and limitations) were not properly communicated to the driver [19]. The accountability issue in Tesla's case was tossed between the driver and the manufacturer, and a regulator had to investigate.

The author considers self-driving a fledgling solution looking for a problem, because when applied to mundane road commuting, it can lead to job losses and economic distress in the short run and unknown outcomes in the long term. However, it has tremendous potential when applied to health and wellness, which we consider next.

5.0 Case 2: AI for Wellness – Health, Safe Water and Air

Health is intrinsic to human life. Unfortunately, countries around the world are facing major health challenges, which get exacerbated by a deteriorating environment (water and air), limited resources and management issues [35]. The UN millennium development goals [13] include health among their targets, and AI can play an important role in meeting health goals.

Taking the case of a developing country, India announced a new health care policy in 2017 [34]. Its stated goal is "the attainment of the highest possible level of health and well-being for all at all ages, through a preventive and promotive health care orientation in all developmental policies, and universal access to good quality health care services without anyone having to face financial hardship as a consequence" (page 4, sec 2.1).

The policy promotes healthy living by taking care of medical needs cost-effectively. However, it does not directly consider issues like access to food, sanitation and environmental pollution. Furthermore, government investment in health is quite low (3.9% of GDP; 30.5% of total health expenses – 2011 estimates [35]), causing implementation challenges.

Now consider the issues a citizen may face related to wellness. They include:

1. Access to health services: Depending on the context, a person may need a planned or unplanned service at or away from their regular location. Even if a service provider is found, they may not be affordable, fully equipped or qualified to deal with the need.

2. Medical diagnosis: Diseases are ever-evolving, procedures change over time, and it is hard for medical practitioners to keep up.

3. Maintaining healthy living: When healthy, citizens need to monitor their food, exercise and daily routines in harmony with the environment, while if under treatment, they additionally need to follow up on medical advice and medicines. Some diseases make a person's health prone to weather swings and to conditions of air or water, curtailing their activities.

4. Financial prudence: A citizen needs means to plan for the cost of health services, including insurance, and to utilize tax incentives.

These challenges are also opportunities for cognitive systems to help people. Indeed, many AI-based digital assistants and chatbots are being built to meet them [39]. In one interesting study [36], a chatbot called Jane.ai was shown to help people follow their health regimen better than alternatives.

Such systems can also be useful in helping people work closely with the environment. Take the example of water, a unique resource vital for all life. We illustrate some personas and their water-related wellness decisions. Abhay may want to take a bath in the river during a religious festival and would want to know which banks of the river (religious sites, i.e., ghats) he can visit without getting sick. Bina may want to tap ground water or fetch river water for household activities. Chetan may want to use river water instead of ground water for irrigating his fields. Divya may wonder whether fishing or vegetable growing in the river catchment area is promising enough to supplement her family's earnings. These and other users routinely make decisions that could be driven by water pollution data, if it were available and made usable with the help of AI researchers.

Unfortunately, the world's water resources are facing unprecedented stress due to increasing population and human economic activity. In [24], we presented a multi-modal interaction system called Water Advisor, consisting of a chatbot, a map interface and a document viewer, to help people explore and engage with water issues. A person can select an area, pick an activity like swimming and see whether water conditions are amenable to the activity, based on available open water data and quality regulations.
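
The following is a minimal sketch of the kind of check such a system can perform: compare the water-quality readings available for a selected area against activity-specific limits derived from quality regulations. The parameter names and threshold values below are illustrative placeholders, not actual regulatory limits or Water Advisor's real data.

```python
# Hypothetical per-activity limits: parameter -> (min_allowed, max_allowed); None means no bound.
ACTIVITY_LIMITS = {
    "swimming": {"dissolved_oxygen_mg_l": (5.0, None), "fecal_coliform_mpn_100ml": (None, 200)},
    "irrigation": {"ph": (6.0, 8.5)},
}

def amenable(activity: str, readings: dict) -> tuple[bool, list[str]]:
    """Return (ok, reasons): ok is False if any available reading violates a limit."""
    violations = []
    for param, (lo, hi) in ACTIVITY_LIMITS.get(activity, {}).items():
        value = readings.get(param)
        if value is None:
            continue  # open data is often incomplete; skip missing parameters
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            violations.append(f"{param}={value} outside allowed range ({lo}, {hi})")
    return (not violations, violations)

ok, why = amenable("swimming", {"dissolved_oxygen_mg_l": 4.2, "fecal_coliform_mpn_100ml": 150})
print(ok, why)  # -> False ['dissolved_oxygen_mg_l=4.2 outside allowed range (5.0, None)']
```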

More generally, a large part of the world's oceans and rivers is unexplored, and self-driving technology in the form of underwater drones (discussed in the previous section) can also come in handy [16]. This would not only address a problem that matters to mankind, but would also lead to long-term economic gains in areas like shipping, food supplies and recreation. It is a challenge problem worthy of attention from top leaders in technology, business and government.
