Thursday, June 16, 2005

please state the nature of my programming....

I would like to share some of the questions posed during the public talk titled "Hitchhiker's Guide to Artificial Intelligence" at the Dana Centre, which I attended last Tuesday:

Are we able to create machines with artificial intelligence? Are we able to create machines with artificial emotions? What is the difference between the two? Are both achievable?

How should we define 'emotions' so that the definition has no link whatsoever to human neuroscience and does not depend on any human qualities?

Why do we need to give machines emotions?

How does communication among people differ from communication between people and machines? What is the fundamental difference? How do we address it?

Are we comfortable trusting machines that can make autonomous decisions? How much confidence should we place in machines?

Do we realize that by steadily relegating all of our tasks and responsibilities to machines, we are losing control over the things we can and cannot do? In short, even as we gain greater control over what the machines do, are we losing control over what we ourselves do?

Does replacing people with machines mean that we have to make machines more like people? Do we want machines with personalities?

Why are we inclined to hold a pessimistic view of machines and robots?

Why do we need to make robots with specifically human aspects? Should we make such robots?

Personally, I think the underlying issue behind all of these questions goes back to the issue of control: whether we are willing to give it up by gradually assigning tasks to machines while, at the same time, desiring to retain it. Unless we are prepared to give a firm answer to this question, I think artificial intelligence research will not be able to progress much.

In addition, I think there is an element of 'arrogance' thrown into the issue as well: what if the machines prove to be superior to us? What happens then? What happens when the created supersedes the creator? I think all of us have this primal fear, and it has occasionally prevented us from moving forward.

But then, this is where the issue of awareness comes in: robots and machines might be able to mimic human expressions, but do 'they' know why they need to do so? In short, do they have any awareness of their actions? Or are they just mindlessly copying our actions and responding within pre-programmed parameters?
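
To make that concrete, here is a minimal sketch of mindless mimicry in the spirit of Weizenbaum's ELIZA; the patterns, the canned replies, and the respond function are my own hypothetical illustration, not anything presented at the talk:

```python
# A toy illustration of "mimicry without awareness": the program matches
# canned patterns and echoes words back, responding entirely within
# pre-programmed parameters. Nothing here models WHY a reply is fitting.
import re

RULES = [  # (pattern, reply template) -- purely illustrative
    (r"I feel (.*)",              "Why do you feel {0}?"),
    (r"I am (.*)",                "How long have you been {0}?"),
    (r".*\b(mother|father)\b.*",  "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule fires

print(respond("I feel lonely"))  # -> "Why do you feel lonely?"
```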

This leads us to the issue of learning: are machines capable of learning? Are they expected to learn the way we humans learn? As a matter of fact, how do WE actually learn? Is learning a sign of intelligence?
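
For a rough picture of what 'learning' can mean mechanically for a machine, here is another sketch of my own (the train_perceptron function and the toy data are hypothetical, not from the talk): a single perceptron nudges its weights after every mistake until it reproduces a rule it was never explicitly given.

```python
# A single perceptron adjusts its weights whenever it misclassifies an
# example; after a few passes it reproduces the logical AND function from
# labelled examples alone, without the rule ever being stated.

def train_perceptron(examples, epochs=10, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            predicted = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - predicted      # zero when the guess is right
            w0 += lr * error * x0           # nudge weights toward the target
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

def classify(w0, w1, bias, x0, x1):
    return 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0

# Four labelled examples of AND; no rule is given anywhere.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(data)
print([classify(w0, w1, bias, x0, x1) for (x0, x1), _ in data])  # [0, 0, 0, 1]
```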

The question of learning, in turn, brings us to the question of whether machines can think. For instance, a thermostat 'knows' three things: it is too cold in here, it is too hot in here, and it is just right in here. Should that be categorized as thinking?
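
Here is that thermostat rendered as code, again a sketch of my own with an assumed setpoint of 21 °C, just to show how little machinery sits behind those three 'thoughts':

```python
# Three "known" states, each a fixed reaction to a number. Whether this
# lookup counts as thinking is exactly the question being asked.

def thermostat(temperature_c: float, setpoint_c: float = 21.0) -> str:
    if temperature_c < setpoint_c - 1:
        return "too cold in here -> switch heating on"
    if temperature_c > setpoint_c + 1:
        return "too hot in here -> switch heating off"
    return "just right in here -> do nothing"

print(thermostat(17.5))  # -> "too cold in here -> switch heating on"
```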

Clearly, the issue of artificial intelligence raises more questions than it seeks to answer. I believe that the answers to these questions lie not in a single field of knowledge; rather, they have to be drawn from many different fields, from neuroscience to biochemistry, psychology to linguistics. The answer is the result of a confluence of different fields of research; it is a truly interdisciplinary endeavour.

One final question to ponder:

What traits or qualities should a machine have or exhibit to convince you that it possesses artificial intelligence?

1 Comment:

Anonymous said...

Before we attempt to answer these questions, it is much more important that we first know the true nature of man.

Consider reading this book:
Prolegomena to the Metaphysics of Islam
by Syed Naquib Al-Attas
particularly Chapter 4: "The Nature of Man and the Psychology of the Human Soul"

5:46 PM  
