Overcoming three of the biggest challenges to evolving a more trusting relationship between humans and intelligent machines

Well, to begin with, machines should stop pretending to be so smart

– Kathryn Hume

With all of the talk of AI stealing our jobs and plotting to take over the world, what is it going to take to create more trusting relationships between humans and intelligent machines?

In less than a decade, AI has emerged from the depths of science fiction to take root in our everyday lives, powering everything from online banking apps to the self-driving vehicles that glide across factory floors.

The result of several exponential technologies converging at once (cloud computing, inexpensive sensors, advanced algorithms, big data and vast computational power), AI's ascent has been truly remarkable – and we've only scratched the surface.

Despite her optimism around AI, you won't hear Kathryn Hume call artificial intelligence smarter than humans – at least not yet. "Take communicating. Yes, our emotions can hold us back, but we also communicate on so many levels at once, in ways computers won't achieve for a long time. I would argue that the gentle touch of a nurse, or a kind eye towards a self-doubting colleague are far more efficient forms of communication than what may be registered with AI. And what about love at first sight?"

Hume, who is VP of Product & Strategy at Toronto-based integrate.ai, has a firm grasp on what AI systems are and are not, calling them "wonderful examples of competence without comprehension, exhibiting what looks like intelligent behaviour, but driven by pure processes of optimization over established constraints."

While AI has already proven effective in a wide range of applications – from diagnosing disease to trading stocks and, more importantly, helping us select which shows to watch on Netflix – the relationship between the general public and intelligent machines remains on shaky ground.

Here are three pieces of advice to help intelligent machines improve their relationship with humans.

  1. Be more open with us

    The 'black box' dilemma refers to the fact that certain machine learning systems, deep learning systems in particular, are notoriously bad at explaining how they make decisions. Unlike with rules-based programming, we cannot simply sift through lines of code to understand a machine's actions. And that's got people feeling uneasy.

    "We have trouble trusting decisions made by such systems because we want there to be a rationale and cause behind actions," says Hume. "We think about liability in terms of agency and intention, and impose these ideas onto the systems."

    "The ideal would be for us to evolve our notions and conceptions of how to hold systems accountable, to build our instincts and muscle in processing probabilities so we can properly assess the uncertainty behind the outputs of probabilistic systems. But that feels like a tall order, and not something that will be practically achieved anytime soon. For now, I think designers need to make more of an effort to make machines seem stupid, not smart, so as to keep people from falling into a trap of believing they are engaging with an entity that thinks and feels like they do."

  2. Treat us (and everyone) fairly

    "Machine learning uses 'vectors' – or strings of numbers – to encode information about data sets, including information like how two words relate to one another to create meaning or how pixels in a photograph align next to one another to create the images we see beauty in," says Hume. "If this data set has traces of racial or gender biases, those traces will get propagated in the vector representation, and when we use it in other settings, it carries forth the traces of its past." In other words, AI systems are excellent generators of bias, finding similarities between individuals and mapping them into groups, and sometimes those groups have socially sensitive consequences. For example, AI-powered applications used to decide who is deserving of a credit card, insurance, a loan, or even a job.

    Fortunately, AI researchers, including the Vector Institute's Richard Zemel and Toni Pitassi, are leading research on what they call 'fair variational autoencoders' – technical jargon, Hume explains, for AI systems structured to retain as much predictive power as possible while obstructing information related to socially sensitive attributes like gender or race.

    "It's a thorny statistical challenge," admits Hume.

  3. Give us some space

    People are, of course, also concerned about their privacy and about how organizations might use their data in ways that could negatively impact their lives.

    While holding a strong conviction that organizations need to do what's best for people, Hume also worries that overbearing regulation could smother innovation.

    "There should be a set of principles organizations use to govern fair use of AI, so they can innovate and benefit from the amazing potential of machine learning, while also doing what's best for people. Start-ups and technologists need to work closely with the policy community to ensure any regulatory decisions address the right risks."

    "Innovation with technology is such a strong creative force," says Hume. "It can provide meaning like art does. We should foster that."

February 13, 2018
