
Can artificial intelligence be wrong?

Alex Alex 25 June 2020

The use of artificial intelligence (AI) and machine learning is sweeping through the public sector right now, with trials of everything from chatbots and intelligent assistants to AI case managers. But the technology also presents some ethical dilemmas that we must address.

There is no doubt that there is huge potential in using AI to find patterns in case handling, to identify breakdowns and errors, or to handle citizen inquiries quickly and efficiently.

In a recent Microsoft report, 95% of surveyed public officials said they believe artificial intelligence can help ease the pressure on the public sector. It is with good reason, then, that 63% of public authorities are investing more in new technology than before.

However, with AIs that are programmed to be more or less self-learning, and that can collect, analyze and provide answers on their own, new challenges emerge that we must address.

Is an AI allowed to make mistakes - and who is to blame?

We have long accepted that people, no matter how skilled and professional they are, sometimes make mistakes in data collection, analysis and the decisions that follow. After all, none of us are infallible.

The question is whether we have the same acceptance of error when it comes to artificial intelligence, or whether we expect infallibility.

We can start by stating that whether it is humans, mechanics or software, errors will occur. It cannot be avoided. However, all experience suggests that the number of errors drops quite significantly when self-learning software analyzes millions of data points and constantly optimizes its output.

We have seen this in healthcare, among other areas, where AIs are already better than specialist doctors at detecting cancerous tumors on scans, and can thus help ensure that more people are treated in good time.

But of course, AIs can and will overlook things, too. This was also evident in the study mentioned above, where the AI's hit rate was 94%. But does that mean we should not use the AI, since it still makes mistakes and overlooks cancerous tumors?

Who is responsible, and what should be done when artificial intelligence makes mistakes? That is one of the key debates we will have to face.

If a doctor makes a mistake in an assessment or treatment, you can complain about the doctor. But who is to blame when it is a computer that made the mistake? Is it the doctor who chose to use the artificial intelligence, the vendor who built the solution, or someone else entirely?

If we do not have that discussion, a fear may arise of touching a technology that could otherwise help improve public services and ultimately save lives.

Can a robot look over your shoulder?

A software robot can sift through thousands of data points, combine them and analyze them faster and far more efficiently than any human. And data is a great basis for making better decisions.

It is not the technology that is stopping us here. It is already possible to make decisions based on close to infinite amounts of data about the individual citizen. The question is no longer whether you can, but when to do it, what data to use and what basic rights the citizen has in this regard.

Artificial intelligence can undoubtedly help us to better assess a child-removal case, to decide entitlement to housing benefits, and to find the right treatment for a disease.

But how much data, and what data, is it reasonable for the public sector to collect and use to make better decisions? There is probably no easy and definitive answer to that, which is why this discussion is so important to have.

There is no doubt that the potential of AI is huge. Indeed, Capgemini predicts that the use of AI could boost global GDP by as much as 1.4%, equivalent to DKK 27.7 trillion, within just five years. So you might be tempted to say: What are we waiting for? Let's get started.

But we need an open and ongoing debate on how best to use artificial intelligence. The biggest barriers to accelerating its use are not the technology or the results, but the need to balance the opportunities with trust and confidence, so that citizens feel that their right to privacy and data protection is safeguarded along the way.
