The Role of Trust and Interpretability in AI

The real estate and property industry is embracing AI with enthusiasm, but how do we as proptechs protect the industries we serve from the well-documented risks and downsides?

Proptech Leader of the Year Sarah Bell discusses the key themes in her Doctoral thesis, Uptown, and explores the responsibility we all share in both growing and scaling AI with transparency. 

Sarah Bell

Before I start, I want to acknowledge Country, Elders, and the traditional knowledge systems of our First Nations people that are vital to a shared and sustainable future. And I'll start by tackling one of the least interesting topics in AI, and that's how to define it, because I've had this conversation with just so many people in the room. When I talk about AI in my research career, I like this definition of Winston's. Winston took over from Minsky as the director of the MIT CSAIL lab, and his definition is deliberately technology neutral. He doesn't talk about machine learning. He doesn't describe deep learning. He doesn't talk about any specific tool or technique.


Sarah Bell

He talks about algorithms enabled by constraints, exposed by representations, that support models targeted at loops that tie thinking, perception, and action together, which your customers won't care about at all. And so the next best definition of AI, the one I use in my practice career, is Winston's other definition: that computers are able to imitate human thinking, perception, and action. And I love this because people were imagining this capability 150 years before him, and I think it is this conception that sets AI apart from all of the other tools that we're used to in terms of technology and brings it into the domain of pioneers. We are pioneers, and we face, therefore, new predicaments in the work that we do. And so my research career is really based on two of these predicaments. The first one is what Brewer calls (not you, Peter, a different Brewer) the predicament of knowledge.


Sarah Bell

How do we process the vast amount of information and knowledge that exists on the planet today? And the realization is that we simply can't, not without advanced software, artificial intelligence, faster quantum computing. And so if that's the answer to the predicament of knowledge, then it gives rise to what Humphreys calls the anthropocentric predicament. And I'm sorry for all the big words, because this is a thesis. But the anthropocentric predicament is this: how do we, as humans, trust something that we don't understand? And when I say we, I don't mean the absurdly technically proficient cohort in this room. I mean we as in the collective, we the general public: how do we trust things that we don't understand? And who cares about this problem? From a proptech community of practice, from a practice context, I think it's really important to understand that the link between trust and adoption of this type of technology is profoundly empirical in the data.


Sarah Bell

When knowledge exists that is specialized, that is highly technical, we need to understand that access to that information or understanding is barred to a lot of people: by opportunity, by capability and natural intelligence, and by access. And so when that happens, we face issues of epistemic opacity (that's the word). We have to be brave enough to understand that a lot of this technology is hidden away from most people. There are three issues with epistemic opacity that I think we have to be brave enough to confront when we're trying to implement this type of technology. And the first one is the black box. People talk about the black box being a really big problem for people to trust technology, and they talk about transparency as a basis of trust. They want a white box, or glass box, as it's referred to in the literature. But for those of us in practice, we know that the black box is one of the very few things that will protect the commerciality of your idea.

Sarah Bell

IP laws are kind of grossly lacking. It doesn't take a lot to kind of tamper with someone's idea a little bit and nick it. The other issue with the black box and transparency is that what we're talking about is quite complex. Samek talks about the issue when we put the onus on developers to explain their models: that sometimes they are so complex that they are unexplainable, that the people responsible for architecting these systems cannot explain them. And then Burt talks about this thing called the AI paradox: the more explainable or transparent we make our systems, the more vulnerable we make them to bad actors who, once they understand them, can infiltrate them. So the black box is also a protection from criminality. The black box is kind of the first problem, and practitioners and legislators will have very different approaches to it.

Sarah Bell

The second is this notion of privileged knowledge. We can't forget that it is a privilege to understand this technology. In 2019, Finkel, who was then Australia's Chief Scientist, said that there were an estimated 20,000 people worldwide qualified to a PhD level in an artificial intelligence related field. And we now kind of estimate that's probably 40,000. That is a very small scientific community, and most of them are working in private practice. So Finkel talks about it as a priesthood or a cabal. And I think Nigel's tinfoil mafia description this morning was fantastic because it really showed the issue of explainability. When you have such a tiny community of appropriately qualified professionals, explainability becomes problematic as a basis of trust. And so one of the ways that we get around it is through anthropomorphism, another $10 word. Anthropomorphism is what we call it when we ascribe humanized characteristics to objects so that we can understand our relationship with them, how they form and how they function in the world in relation to us, but in a position that is slightly more elevated than an object.


Sarah Bell

So this humanization has been really important in artificial intelligence for mass market adoption, to answer that predicament of how we trust things we don't understand. Well, we describe them as human. We make them human. And so in my research, I use the term artificial actors. When I'm talking about humans, I talk about social actors, and when I talk about robots and software bots and things, I talk about artificial actors. And so we need to recognize that Siri, Cortana, Rita, this fiction of humanized actors, is something that we created. It really is a phenomenon of this type of artificial intelligence, a fiction that we created to help people understand that which transcends their own capabilities. I know when we started our journey with Rita seven years ago, no one was talking about AI, not pre-ChatGPT. Siri was still stupid and she didn't understand, right?


Sarah Bell

Going out to real estate customers, trying to talk to them about how we were using algorithms to move tracts of data around the cloud, it was super unexplainable. And also, a lot of the AI in Rita is not in the interface. Chatbots were so stupid back then that we would never have dreamed of putting language in between one of our customers and their customers, because it was so unreliable. So a lot of Rita's artificial intelligence is in natural language processing engines that you kind of can't see. So we created Rita. It's a lie, it's a fiction. It doesn't exist. It's disappointing to customers who have birthday parties for Rita. My job title was Rita's Mum for so long. If I got a support request back in the early days, when I was also support, I would say things to customers like, oh, go give her a smack.


Sarah Bell

And they would be like, don't do that. Go smack Ian. But this is where things get really muddy, and in my research career it made me reflect on whether that was the right thing to do. Because as the technology evolved with artificial actors, there came a point where we weren't simply getting them to automate mundane tasks, right? As the technology became more sophisticated, what we were asking these artificial actors to do was make sophisticated decisions in a black box that really impacted human agency. Who gets a home loan, who gets a rental property? And now we've got generative AI making what we would call intellectual works, but we don't know how. And so Bodo argues that these technologies need to be trustworthy, not just for the users that rely on the tools, but because these tools now dictate how humans trust each other. Our relationships and our communications are intermediated by artificial actors.

Sarah Bell

So I'll fast forward through 5,000 hours of research and ethnographic fieldwork to my findings, which, to your great relief, I've summarized into three themes. The first one of these is that risk is an embedded element of a technosocial paradigm. When I talk about a technosocial paradigm, I'm talking about how our connections to each other are technical first and social second. I don't talk to my family; I send photos on Facebook Messenger like everyone else in the room. And don't pretend you don't. The WhatsApp group chat is how you connect with all your friends. No one's got time to go out anymore. That structure of our social relationships is technical first, and that's the opportunity that we all have. We all collect digital breadcrumbs from our social connections being digital, and we're fueling our businesses with those digital breadcrumbs. That's the opportunity, but it's also the risk.


Sarah Bell

And what I find fascinating about risk is that up until kind of modern times, we didn't have a concept of risk. It's only relatively recently in human history that we became responsible for what happens when we take action. Before that, it was God's fault. Like, Raf, if your business was going to be successful, that was because God liked you, and if it went the other way, sorry, you had no control. It was all predetermined and pre-written. It wasn't until we changed our thinking around that that language like risk was introduced into our vocabulary, because we became responsible for our choices. And that's probably not something we talk about enough in terms of how we frame this. But I think this replacement of our ancestral concept of danger with risk, the possibility of future damages which we will have to consider as a consequence of our actions today without full foreknowledge, requires risk.


Sarah Bell

The rationalities we have today, in particular so far as they involve others, especially require trust. And so that gives us a new type of anxiety about future outcomes of present decisions, with a general suspicion of dishonest dealings and things going wrong. I'm going to let that haunt you like it does me and move on to my next theme, because we don't have infinite time. The second theme of my research was that trust in proptech can be encouraged by trust architectures that are understandable. When I talk about trust architecture, what I'm really talking about is accountability in our human systems. It's recognizable in our social world in legal frameworks, the way that criminal codes and torts apply to legal persons like governments and companies and people. And then we also have social norms and mores for where the law stops.

Sarah Bell

And we call that answerability, this moral answerability. So I've got a mate who we call the Brendo, and when he comes over, he brings a carton of beer but takes home whatever is not drunk. There is nothing criminal about that. But we feel very strongly that there should be answerability, social answerability, for Brendo's actions, and the social consequences that exist where the law stops are really important. But none of those systems apply to artificial actors. And so even though we are encouraging people to consider these objects, this technology, as humanized, we have to recognize that we don't treat them that way when it comes to accountability and answerability. And one of the things I love is that we don't just give them names. We design this technology to look like the humans we know and love. This is my delicious late-in-life baby son, Banksy.


Sarah Bell

And this is the android robot called Pepper. He's been deliberately designed with big eyes like my toddler's and a soft, curved, chubby body, so that when Pepper inhabits the physical environment that we share with him, we don't kick him over and steal his iPad, because he looks like our toddler, right? The skeptic in me, the kind of criminologist from my behavioral science undergrad who did a lot of work around answerability and disproportionate sentencing between males and females, and between pretty and not-so-pretty females, wonders if the representation, the way that we feminize the names we give to these robots, the way we make them look feminine and young and helpless, also has a role in potentially delaying or avoiding answerability, which is probably another thesis. But the reality is that in our current system, the way that we treat these actors is in line with objects under product liability.


Sarah Bell

So product liability traditionally relies on a plaintiff being able to explain the breach, which is very difficult to do when you don't know how something works, and to show that it wasn't intended to work that way when it's hidden behind a black box. So although we have product standards and all of these things, we don't always know when it's not working as intended. And sometimes we consent: when OpenAI gives us a ridiculous GPT outcome, we've kind of accepted that that will happen. And so there's no accountability yet. Elish talks about this thing called the moral crumple zone, and that's where humans, our social actors, are filling the gap in between the accountability structures that exist for objects and getting crushed in the middle. So, interestingly, recommendation eleven of the Australian Human Rights Commission report on AI-informed decision making is that when an artificial actor makes a decision, strict liability ought to apply to the human who has delegated the decision.

Sarah Bell

Which is quite interesting if you're a decision maker delegating that responsibility and holding on to strict liability for it when you don't know how Bing is making the decision. The European Parliament is currently discussing a risk classification framework, and from what I understand, this is kind of the way that Australia seems to be nudging, and Darling argues that we have quite a similar paradigm already in our law. Look at how we use animals to do jobs, how we outsource jobs to animals: I grew up on a farm with cattle dogs that you would train, and the analogy kind of checks out. A good cattle dog costs about $20,000, but they probably replace about four people. And there is a risk classification framework in the animal control acts: depending on what type of breed it is and what job it will be doing, and then depending on the risk, it determines the liability for the owner.


Sarah Bell

So if you want to have a staffie and make it fight, you will have strict liability, as opposed to different treatment of a different breed. My final theme is that trust in proptech can be the result of experimentation. And I'll pause there, because I think what's been remarkable about the shift of AI from sort of the techie fringe to the mainstream with ChatGPT has been this ability for your kind of everyday person to experiment, to understand how inputs affect outputs, and to play in an environment where very little is at risk, right? There's a very high tolerance for the sort of error that comes out of ChatGPT. But one thing that was quite clear in my research is that the degree of fear and the impact of an error would affect trust and our appetite for risk. So it's kind of a banal example, but I would like to say I'm one of those appropriately qualified members of the scientific community, but if you think I would let a robot cut my hair, you have gravely miscalculated the factor


Sarah Bell

of trust that I would risk in the machine, and gravely underestimated the factor of impact to me when it comes to my hair. But perceived credibility is an insulating factor. So the example I use is that if Apple created a Salon 5000 and Beyonce endorsed it, I reckon I'd give it a go. And I'd do that because I trust Apple, because it's performed consistently well for me over time. I don't have beef with Apple, I love it, and Beyonce is my fantasy best friend. That's why I call out perceived credibility, as opposed to actual credibility, throughout this theme. This notion of interpretability, that users are empowered to understand how inputs affect outputs and make their own judgment about whether to trust and adopt technology, has become paramount. And it's funny, as I started this research journey I read many guidelines about ethical AI. There are so many, and none of them, of course, are required or mandatory.

Sarah Bell

But I was determined not to produce another preachy thesis. The world didn't need another paper on ethical AI destined for the university repository or a fire somewhere. From the outset, I wanted my research career to benefit us in this room, the community of practice who are doing this every day, who are having conversations with regulators who say everything has to be transparent, and who go, oh, but why would I spend money developing technology if it's got to be transparent? What I wanted was for the conversation about this notion of interpretability to continue. So in my research design I took all of the findings, and along with 14 other people from our community, we embedded those findings in a novel. Why not? There's nothing worse than someone trying to flog a book, so I'll just tell you that you can't buy it. And it's also not my book; we wrote it, and I can't tell you who they were, but there are 14 of you who might declare themselves.


Sarah Bell

But Uptown is the ethnographic novel that was born as a creative artifact from this very boring research that I did. And the idea was that it would give us, as a community of practice, a way to have a conversation about this, and to keep having that conversation once my school project was finished, and to embed it into practice. So Uptown is the story of a small country town in Australia that decides to adopt a fictional proptech called Macy, which stands for Master Planning Algorithmic Community Interface. And it basically ends up being gentrification as a service. When you think about gentrification as a service, it's not that good of an idea. A lot can go wrong, as we found out. And the story sort of wrote itself, but also it really didn't. It's 80,000 words and a hell of a long time.


Sarah Bell

But the reviews from people I think are clever, like Nigel Dalton, have been quite encouraging. So if you want to read the story that we wrote together as a community of practice (please, no one tell my boss I put a bad word on a slide), then hit me up on LinkedIn and we'll get you a link, so that you can be a part of the conversation that so many clever people in and out of this room from our community started. And so as I conclude my comments today and reflect on my research journey, I guess the main thing I want to know, after many tens of thousands of dollars and so many hours, is: did it matter? And I think just as the introduction of machines and factories in the Industrial Revolution created broader sociological and political implications for workers and for social life, we can't escape the significance of this.


Sarah Bell

We can't escape that there are also ramifications when this type of technology comes in and disrupts how we work and communicate with each other. So Joyce and others have identified the need for this sort of sociological theory in the work that we do and in shaping the future of artificial intelligence, so that it's not removed from our social context, from our human context. Lots of contemporary voices are talking about the politics of algorithms, and I think we've already heard some great discussions this morning about bias in algorithms and the politics of algorithms. I think we have to include how machine thinking can create structural discrimination and structural barriers to access, because of how it does influence and make real social decisions in our real world. So, recognizing that these machines, powered by advanced software and artificial intelligence, are framed as social, when I think about whether it matters, when we ask people to accept this impersonation, this imitation of social actors, I think we also need to accept that these things are necessarily political.


Download the Ethnographic Novel,

UpTown

written by Dr Sarah Bell

It was designed to be an ongoing conversation with the Proptech community of practice, so if you have any thoughts that you would like to share after reading, please reach out to Sarah at sarah.bell@corelogic.com.au