
        What Is Responsible Artificial Intelligence?

        Warwick Business School experts are working on developing new systems that could mitigate the risks of AI—a powerful technology that impacts our everyday life

        Artificial Intelligence (AI) has revolutionized the way we live. Alongside the growing influence of algorithms in how business is organized, these technologies now shape our personal decisions: where we travel, what we buy and read, and which music we listen to.

        Given AI’s prevalence as an increasingly powerful technology, it is important that we can trust it to be a force for good in our society. Yet the bias and discrimination inherent in the data AI is built on has been widely documented.

        Experts from Warwick Business School (WBS) have been working on finding the source of such bias and how to minimize it. The aim is to build AI technology that can be trusted to be ethical and fair, and that may benefit society at large. That is Responsible AI.

        But what exactly is responsible artificial intelligence? 

        BusinessBecause spoke with two experts from WBS in the field of AI: Ram Gopal, professor of information systems and management, and Shweta Singh, assistant professor of information systems and management.


        Ram Gopal (left) and Shweta Singh (right)


        How do you define AI?

        Artificial intelligence is the development of computer systems that perform tasks that normally require human intelligence, such as speech, facial recognition, or decision-making. 

        “AI is essentially about systems being able to recreate human intelligence,” says Ram. 

        Artificial intelligence has evolved greatly since the term was coined in 1956. It is shaping the future of work in many ways, from the creation of brand-new jobs to the introduction of robot colleagues into the workplace. 

        “We have now reached a point where these technologies don’t just replicate human intelligence but may also exhibit creativity,” explains Ram. For example, Ai-Da, the world’s first robot artist, can draw and paint using cameras in her eyes, algorithms, and her robotic arm. 

        With the technology’s seemingly limitless potential, and its growing impact upon our daily lives, it is important we mitigate the risks associated with AI to benefit from the advantages the technology brings.



        What are the risks associated with AI?

        Shweta believes that to be effective, AI must possess four fundamental characteristics. It should be:

        - Fair

        - Ethical

        - Transparent

        - Free of bias and discrimination

        She says that because AI seeks to replicate human intelligence, it has naturally mirrored and amplified the bias and discrimination already present in our world.

        “AI learns from all the data that we provide: textual data, images, videos, or activity on social media,” adds Ram.

        “The risks are an advantage as well as a disadvantage. As humans, we may not have known the extent to which we were discriminating or being discriminated against, which stems from our subconscious bias,” Shweta says. “But this also provides an opportunity for us to correct it and overcome these challenges within AI technology and in society.”

        The concept of responsible AI exists to find approaches to overcome these challenges.


        What is responsible AI?

        Organizations around the world are starting to recognize the responsibility they have to mitigate the risks of AI on society. 

        One example, says Shweta, is Microsoft, which announced it would halt the production of several AI-driven facial analysis tools over fears that they were open to abuse.

        “What you’re beginning to see is pushbacks in different domains related to AI,” Ram adds. “This may lead to more regulation, legislation, and put the onus on organizations that use AI that may potentially harm individuals to be more responsible.”

        Shweta, along with other experts from Warwick Business School, is finding ways to make AI more trustworthy and free of bias and discrimination. They hope to build an extra layer of artificial intelligence that sits above current models, building trust in the form of a technological fix. 

        This added layer of responsibility includes designing AI that can detect harmful content, such as cyberbullying or hate speech. 

        Another way AI can be made more responsible, says Ram, is by ensuring the data fed into the system is representative and balanced, including the voices of disadvantaged minorities and women.

        “The bias present in society is being amplified by AI,” says Shweta. “What responsible AI will do, when it reaches its final stages, is drive a society that may be closer to being bias-free. A society that thrives on equality, fairness, transparency—and is absent of discrimination.”

        Over time, the pair argue, artificial intelligence will become more trustworthy and responsible. A growing body of regulation and legislation is being put in place to govern the use of AI and mitigate the risks attached to it.

        “AI is touching all of our lives in almost every activity, and we are now seeing pushbacks by lawmakers to mitigate the risks posed by AI,” says Ram. 
