Heart Work and Hard Tech: Building Fairer Systems in the Age of AI with Aubrey Blanche explores what fairness truly means when AI shapes how we hire, develop, and lead. In this episode, we unpack the human side of automation, the ethical risks hidden in today’s HR technologies, and how leaders can design systems that stay accountable, intentional, and people-centred. If you’re ready to future-proof your HR practices and rethink how AI shapes your workplace, this episode is essential listening.

What does fairness look like when AI is making the decisions?

Artificial intelligence is changing the way we hire, develop, and lead, but who benefits when we rush to automate? In this episode, I speak with Aubrey Blanche, Director of Ethical Advisory and Strategic Partnerships at The Ethics Centre, about why AI isn’t just a tech issue, it’s a human one.

We explore the messy middle of ethical leadership, the overlooked risks in how HR teams use AI, and how to ask better questions before adopting shiny tools. Aubrey brings years of experience from Atlassian and Culture Amp to help us rethink what fairness really means in the age of automation.

If you want to future-proof your HR practices and lead with intention instead of hype, this episode is your prompt to pause, reflect and redesign. Because every system has a heartbeat, and it better still be human.

What questions are you asking before using AI in your people practices? What would fairness look like if you designed it today? Let’s keep this conversation going: connect with me on LinkedIn.

SHOW NOTES: https://reimaginehr.com.au/resource-hub/ 

In this episode we cover:

  • Introduction to Aubrey’s work and “Math Path” philosophy
  • AI as a human, not just technical, problem
  • Ethical risks in current HR practices using AI tools
  • Gaps in ethical decision-making training in organisations
  • Parallels between AI ethics and DEI failures
  • What designing for the minority means, and why it works
  • How to approach vendor reviews with an ethical lens
  • The Workday lawsuit and what it signals for HR
  • Why identity-led DEI efforts may backfire and what to do instead
  • Environmental impacts of AI infrastructure
  • Ethical leadership in messy, complex environments
  • Training and frameworks to make better ethical decisions
  • Intergenerational shifts and the future of values-led leadership
  • Aubrey’s optimism for innovation in marginalised communities

Resources mentioned in this episode

 

About Aubrey Blanche

Aubrey Blanche, The Mathpath (Math Nerd + Empath), is an ethical business and technology executive with global experience specialising in developing and scaling teams and processes equitably to achieve responsible growth objectives. She is an expert at systems-level analysis and design and a believer in data-informed methods, with extensive experience in communications (internal, external, and crisis), HR, ESG, AI governance, and go-to-market strategy. She is an experienced non-profit board member, startup investor, and advisor. She is currently the Director of Ethical Advisory & Strategic Partnerships at The Ethics Centre and Founder of her eponymous equitable design consultancy, and is a master’s student in AI Ethics and Society at the Leverhulme Centre for the Future of Intelligence at Cambridge University.

Connect with Aubrey


More about Reimagining HR

Have you ever hoped for someone to save you time and effort by sorting through the overwhelming amount of HR content and letting you know what deserves your attention?

Join HR Game Changer Trina Sunday as she challenges conventional HR practices and dives straight into the heart of what matters. After two decades in HR, Trina understands the struggle of feeling time-poor and uninspired. She uses her knack for connection and facilitating meaningful storytelling to bring fresh perspectives from global thought leaders and real people who’ve been where you are.

From successes to setbacks, she’ll navigate it all as we strive for happy and healthy people and workplaces. Reimagining HR is your shortcut to meaningful insights and strategies that truly make a difference.

Connect with us at Reimagine HR:

 

Episode 46: Heart Work and Hard Tech: Building Fairer Systems in the Age of AI with Aubrey Blanche

Ethical leadership is at the crossroads of leadership, technology and humanity

Trina Sunday: Today, we’re exploring a question that sits right at the crossroads of leadership, technology and humanity. How do we make sure progress serves everyone, not just a privileged few? Artificial intelligence is changing everything about the way we work. How we hire, develop and even decide who gets opportunity. But with all that power comes a new kind of responsibility. Because when we design systems, we’re also designing the future, the future of work. And whether we realise it or not, we’re embedding our own values into code. Joining me for this conversation is someone who’s been at the forefront of this work long before ethical AI became a buzzword: Aubrey Blanche. She’s the Director of Ethical Advisory and Strategic Partnerships at the Ethics Centre. You might know her from her pioneering work in diversity, equity and inclusion at Atlassian and Culture Amp, where she helped redefine what fairness looks like in modern workplaces. In this episode, we’ll explore how to keep humanity and fairness at the centre of innovation, what ethical leadership really looks like when the right answer isn’t clear, and why the future of leadership might depend less on how hard we work and more on how human we’re willing to be.

Welcome to Reimagining HR with Trina Sunday, the rule breaking podcast

Welcome to Reimagining HR with Trina Sunday, the rule breaking podcast where we challenge our thinking and our current people practices. This podcast is for time-poor HR teams and business leaders who are feeling the burn, lacking laughs and not feeling the love. Hi, I’m Trina, your host, and I’m here to cut through the BS, explore different ways of thinking and create high-impact HR functions, because happier, healthier organisations are better for our people and our, uh, bottom line. So if you are keen to flip traditional HR on its head, hit the follow or subscribe button so you’re the first to know when new episodes drop. I’m here to help and also to shake things up. So let’s get started. Aubrey, it’s good to have you back on the podcast. Every time we speak, I’m reminded of how much depth and courage and clarity you bring to conversations around equity, ethics and leadership, and especially the future of leadership. But before we dive into bigger questions, I’d love to ground listeners who don’t know you in a little bit about you: the Mathpath, the work you’re doing now at the Ethics Centre, and the threads that connect everything you’ve done, whether that’s Atlassian or Culture Amp or beyond, a bit of the story behind the work that matters to you. So what is it that’s brought you into this space, and how do you introduce yourself to people to understand all of the holistic parts of you that make you amazing?

Aubrey Blanche: Yeah. So I talk about how they call me the Mathpath, and where that really came from was conversations I was having with folks where I was really struggling to articulate my perspective on this work. And so Mathpath is a bit of a portmanteau of math nerd plus empath. And the idea is my work is fundamentally grounded in this belief in the dignity and kind of value of everyone. I’ve constantly been interested in these questions of what ought we do and how do we make the world better for more people? And that’s really influenced by my personal journey. As someone who was born into a pretty tough situation but was adopted into a very fortunate one, I am a case study in what happens when you get privilege kind of randomly. And so there’s a personal motivation to want to do something with that access that’s ultimately beneficial. But the math part of that is, I once joked that, like, I’m really only good at things that involve books. And what I mean by that is I’ve always loved math and social science. And so it’s this idea that I believe we can do things that are ultimately positive for human welfare by taking a really analytical and systems kind of thinking and analysis approach to it. And so that’s where I come into the work. And I’ve had some jobs that people think of as, like, diversity, equity and inclusion, which was kind of where I first came into this work way back in my first tech company. And then I stepped out of that and led diversity at Atlassian for five years. At Culture Amp, um, I led kind of their ESG portfolio as well as people operations. And, you know, now I’m at the Ethics Centre helping them think about the ethics of artificial intelligence and working with their corporate partners to really encourage ethical decision making amongst leaders. So kind of doing the right thing in an organisational sense is really the through line of my entire career. 
Even though I’m much more focused on responsible technology and AI, uh, ethics now. I think that fundamentally AI problems are mostly human problems. So my people background I think is something that I use every day.

Trina Sunday: It’s really interesting how lots of people are separating AI as if it’s a tech problem. It’s interesting to hear you talk about that because I fundamentally see it as a human problem. And I was talking with a peer the other day around the fact that AI agents are becoming part of our teams. Teams are ultimately kind of human. There’s a whole complexity around how we incorporate that into our everyday because it’s here.

What are some of the headlines you’re seeing from an ethics perspective

What are some of the headlines you’re seeing from an ethics perspective? Where are the big rocks that you’re kind of putting your mind to?

Aubrey Blanche: Yeah, so I think there’s a lot around privacy and confidentiality that freaks me out. So certainly in a lot of jurisdictions there are good privacy laws. Here in Australia we have the Australian Privacy Principles, there’s the GDPR in Europe, and obviously in the US there’s some state-level stuff, but federally not a lot. But I am seeing people put information into LLMs that is, one, illegal, two, really problematic, and, three, and I think this is the most concerning part, they don’t realise that they shouldn’t be doing it, because they don’t have the training to understand the compliance aspects of that. But they also don’t likely have an understanding of the underlying technical mechanics of these tools. So they don’t understand why they’re potentially putting people in danger by putting their personal information into this, whether that’s an individual sharing their personal thoughts because they think ChatGPT is a therapist, to, you know, examples we’ve seen where HR recruiting professionals are putting candidate information into non-enterprise-grade tools. So I think that for me is a big thing. But overall I think with ethics there’s just not enough understanding of the types of questions we should be asking of these technologies. Now, I’m certainly not fully a doomer, although I am a cynic, or at least highly critical, but I do think that the hype that’s being sold about AI and the reality of how useful it is are quite different, and I don’t necessarily think our people people are equipped to navigate that, um, always.

Trina Sunday: There’s so much in that, and I think what you’re talking about with the candidate information really resonates. Because I did have a conversation last week with six recruiters, and I think I mentioned to you, it sounded like it was just custom, ordinary practice to be uploading people’s CVs and resumes into ChatGPT and having it spit out candidate summaries. It was just really interesting sitting at that table, and I just observed for a while. It’s Trina, the world according to Trina, I can’t keep my opinions to myself. However, I sat there for quite some time kind of listening. But there were six people there, and all six people were doing it. And I sat there thinking, holy moly. Not that I’ve ever said that before, but, you know, it’s just at a level where there was no conscious thought about it. Not one person brought up ethics, not one person was considering privacy. And we’re in the privacy game, right? Like, HR typically, our functions are, uh, the holders of privileged information. And it was kind of a bit mind-blowing. And part of why I reached out, to be honest, is because I was quite alarmed.

Aubrey Blanche: I mean, I think that’s true, but I think it goes back to, like, we don’t really have the infrastructure here in Australia, and this is one of the reasons I love being at the Ethics Centre, we don’t actually have the infrastructure to teach people how to make ethical decisions. Right? At the Ethics Centre we talk about the question, what ought we do, and it turns out that’s not exactly a straightforward question. So first you have to be clear on your own values. We don’t try to push a particular worldview on folks that we work with. We actually get them in touch with their own views and values, and then give them thoughtful kind of cognitive frameworks to make decisions in line with those values. Obviously there are certain things that are non-negotiable, human dignity for everyone is required, those types of things. But I think that we don’t teach people in an engaging way about how this is relevant. So I have a course that I did on AI legislation and data privacy, and one of the things we really focused on was that it wasn’t a compliance training; it was a training that taught people, in this case recruiters, how to interact with these laws and what it means for their behaviour and the tasks that they complete in their job. And so I think we are not investing enough in ethical decision-making training for folks. And I think the criticality of doing that is exploding exponentially because of the way these technologies work. And, uh, something that I find a bit frustrating is that the executives who are pushing to AI everything are not being equally as vehement, uh, about the ways that we can do that safely. Like, safety is possible, but moving at the speed that I see the headlines running, or encouraging people to run, is not the way we do things safely. And to be really honest, it’s also not the way we do things effectively. 
Like, I really don’t think there’s a trade off between safety and responsibility and effective implementation. They’re not opposite sides of the spectrum.

Trina Sunday: I see so much parallel with the diversity, equity and inclusion work in that statement as well, around the fact that it’s seen as this add-on or an afterthought as opposed to woven into the fabric of how we design the work. You know, like, if we’re designing, design being the key word here, equitable organisations, and it’s not an add-on, it makes our organisations more effective overall. It’s how do you make sure that you’re designing for everybody, not just the privileged few. It just resonates with me that it feels like we’re in the same space with AI as we were with DEI.

Aubrey Blanche: I mean, I think that’s right. Obviously, um, I have experience in both, and I can tell you that structurally the problems are really similar, as is the necessity of systems-based solutions. I also think something that can be quite difficult to explain to folks who are on the hype train is that, similar to DEI, the type of work that you do in AI ethics is often not sexy if it’s effective. It’s not what you think of, um, and not what gets the PR. So some people can see it as quite boring work, or work that slows things down, not to me personally, mind. And the fact is, moving safely does move more slowly. But I think it’s really important to embrace the notion of going slow to go fast. I guess the other corollary is that your measure of success is a bad thing not happening. So you can’t actually prove that you’ve done something correctly, because it’s like, oh, we haven’t had this catastrophic failure where our technology harms someone. Like, I can’t prove the counterfactual, even though I can tell you it’s likely that we’ve reduced the risk of a catastrophic event by a lot. And so I absolutely see the parallels. But the way to move forward, and hopefully AI ethics doesn’t go the way of DEI, which hasn’t actually disappeared, which is a sidebar comment, is really to think about accountability and hold people accountable. So even if you’re maybe not someone with institutional power, are you always asking that question: what does ethics training look like? How are we going to be able to make safe, ethical and responsible decisions when it comes to this technology? And there’s a whole enterprise risk management argument to be made for this. Even if ethics is not your jam, it turns out that good risk management and ethics actually go hand in hand.

Trina Sunday: It’s a really interesting statement to say out loud, like, if ethics is not your jam. Because I think about leadership, and it’s just so interesting, right, that we have to actually state the obvious: one of your core responsibilities as a leader is to be ethical. Right? And to be kind of leading by example with that in our organisations. And it would just be remiss of me not to pick up the DEI thread in this, and then we’ll get back on topic. Tangent warning. I think we have had a lot of, let’s just say, drama, noise, geopolitical shifts, all the things, and I won’t give airtime to certain things. But things have not disappeared, right, in terms of DEI. And there’s conscious effort and habit, where for me it’s about intention, right? Like, if we believe that we shouldn’t leave anyone behind, then there’s an intention behind how we do things. And I think that, you know, it’s a responsibility again of leadership to be able to drive that moving forward, regardless of what politicians or other things are happening.

I’m curious what your thoughts are around where we are with DEI

I’m curious what your thoughts are around where we are with DEI.

Aubrey Blanche: Yeah, I totally think that’s right. Obviously I have clients in the US and here in Australia, and, you know, I still partner with Culture Amp as an advisor, and I see that people are moving away from the words and are often moving away from performative gestures that were never that effective at creating structural change anyway. But the core basic practices that I would think of as necessary have not actually disappeared. So things like structured performance reviews, making sure that you’re using behavioural interviewing questions, these kind of basics of good people practices haven’t gone away. And that’s actually what the core of DEI always was: creating structure and creating the kinds of processes that make it more difficult for human bias to operate. It doesn’t mean impossible, but better. Now, obviously I’m not talking to organisations that don’t care about it. You don’t call me if you don’t. But I’m seeing a move away from overtly social justice coded words towards more kind of normatively centrist language. I would say that the word equity has gotten spicy, and I think part of that is because we haven’t figured out a way, at least some of us, to talk in plain language about what equity means and why it’s important. Um, so I do see there are real shifts, but these, you know, claims of DEI’s death, I think, are vastly exaggerated.

Trina Sunday: I think you’re right. From a HR perspective, and as a HR advocate that’s working, you know, predominantly with HR leaders and teams to transform mindset, strategy, how we show up, you know, how we influence and improve people practices, inclusivity and that sense of belonging is just an outcome of good people practice. And similarly with AI, you know, if we’re looking for outcomes of, you know, ethical ways of doing things, again, so privilege isn’t pulling the levers, and we’ve got all of our people using AI for good instead of evil, then again, it’s just good practice. Right? But we know that technology mirrors the humans that build it, and every bias, every blind spot gets amplified unless we are intentional. What do you think some of the things are that we need to be the most intentional about, if you’re thinking about AI and ethical AI?

Aubrey Blanche: So I think being able to ask questions, first of, like, how to do an effective vendor review, so how to ask questions of your vendors about what they’ve done in terms of safety testing. So certainly getting in the discipline of reading model cards, if someone is using a wrapper on a foundational model.

Trina Sunday: Tell me what that means.

Aubrey Blanche: It is, uh, something that was instituted, or suggested, by Timnit Gebru and colleagues as a way to start standardising this. So there isn’t really a standard process for it, but basically major AI models will put out a model card that tells you things about their training and their specifications. Ideally it includes information about potential biases or safety testing that’s been done. Now, I can tell you, for example, as you would kind of guess from the branding, Anthropic’s model cards go much more deeply into safety testing than, say, Facebook’s, er, Meta’s, or OpenAI’s. Um, but that would be one thing: to be asking what models are being used and whether you can examine model cards, asking questions about safety testing, asking questions about right of redress and opt-out. So these are, like, basic things that you should be doing, and thinking about how you deploy these, but also being clear that you won’t purchase something that hasn’t gone through rigorous safety testing. So I think the only way that we actually get to a good place with this, and I’m kind of speaking about HR tech now, is if the buyers are actually critical and are willing to not risk discrimination in the hiring process to speed something up. So I think it’s really important to recognise that you as the buyer have an incredible amount of ethical power. If you think, and I believe this is true, that budgets are moral documents, then if you are willing to spend money on risky technology that has not been tested for its discriminatory potential, you are inherently consenting to discrimination in your hiring processes. And I personally don’t think that’s acceptable.

Trina Sunday: Uh, let that sit for a second. Like, there’s, uh, so much power in that. And I think there is such a separation in procurement from where our HR practice and our, uh, organisational principles are in terms of HR tech. Are you watching what’s happening with Workday?

Aubrey Blanche: Yes, I think we’re all watching. But I think it’s also important, you know, when I look at these individual cases, to learn a broader lesson rather than to point fingers at a particular company. Like, I think it’s really important that we don’t get drawn into the n-equals-one story with Workday. You know, I don’t know the specifics of the case, but I do know folks who work there, and in general they tend to uphold best practices. And so a lot of the things they’re doing are being replicated across the industry. It’s just that they’re big enough that they’re being targeted with lawsuits. So, yeah, anyway, yes, I’ve been watching it, but I think we need to be principled in looking at the general problems and practices and questioning those and going, what can we do to do better as an industry, rather than just criticising them. Yeah.

Trina Sunday: And I think it’s one of those things where I don’t think it’s company specific. Right? It’s indicative of a bigger problem, to your point, around the scrutiny that we need to put on what we do and how we do it. And it is a scale thing. So it’s like, if you are playing in more of the spaces, then there are more people that are going to ask questions around what’s happening. But I think it’s a test. It’s a test of where we are with this, and I think it’s put a spotlight on it where I hadn’t really heard anyone talking about it. And it’s given me an opportunity to talk about ethical recruitment practices and the use of AI, which I didn’t have before, because suddenly people are paying attention. Now, it’s not fair, to your point, just to focus on Workday. But I think that case has actually given an opportunity to bring ethics and discriminatory practice into the spotlight, where it just wasn’t being talked about before, in my experience.

Aubrey Blanche: And I think that’s true. There’s something that, um.

Design for the minority, not the majority, is important

So Bo Young Lee, who’s the CEO of AI for All, a really great thinker; you should follow her on LinkedIn. Something that she said that really impacts me is that when you look at the demographics of who tends to function as an AI ethicist versus an AI researcher or technical engineer, you tend to see some pretty strong demographic shifts. So I can tell you that in Cambridge, uh, we are a quite brown, queer, female cohort.

Trina Sunday: Uh, interesting.

Aubrey Blanche: There are people who do not fit those identities, but it is a much more, kind of, underrepresented or marginalised cohort than the average population. So I think it’s really important that we think about that and the fact that the voices of certain people are often devalued or not listened to in our society and in tech. And also, the types of people who are most interested in these ethical issues tend to come from those voices. And so there’s a bit of, like, a, uh, double whammy in terms of how we get these voices and this expertise valued and implemented appropriately: we’re fighting uphill in a lot of ways, because we’re the type of people who are often not taken seriously by those in power in the first place, even though we’re trying to help them out.

Trina Sunday: Yeah, this is where I just keep coming back to these DEI, uh, parallels. Right? It’s not the fight of women to fix the system around gender equity. You know, it’s not our brown and black friends that are, uh, there to solve the racial problem. You know, like, societally, it’s the majority that has to address the disadvantage and the bias and the discrimination that we are projecting on minorities. And I have really interesting conversations about that. And something just to feed back and loop, you know, a couple of years on: there’s something from when we spoke last time around building equitable organisations that I’ve carried really fully into my work, something you shared with me, and that is, you know, design for the minority, not the majority. And if you design for the minority, then you will benefit everybody. Right? And so there is something that I’ve been doing and leaning into a lot in terms of supporting, for example, women of colour as my guiding kind of, you know, force in the gender equity work I do. Because I’m like, especially in an Australian context, if we can crack the racial element, generally the gender bit would be easy. It’s really powerful when you can have people share their thinking in a way where you can grab onto that and run with it. So thank you.

Aubrey Blanche: Yeah, I think that’s totally right. Oh, I’m so glad. I’m, like, so delighted. And I mean, I’ve seen that in my work. You know, we focused a lot on intersectional analysis at, uh, Culture Amp, and we had a real emphasis on the work we do around racial equity and disability inclusion. And the reality is, the non-disabled white women at Culture Amp do really well. And I’m not just saying that in some flippant way. It’s like, I’ve looked at the survey data, and in general they progress. And so that’s a proof point of what we’re talking about, where by designing for that margin, we’re making a principled choice not to leave anyone out, because the types of structures we put in place for those intersectionally marginalised experiences end up kind of raising the tide for all boats. And so I think that’s an important principle, because it gets us away from the zero-sum thinking. That said, I do think a mistake that we’ve made in DEI that I’m hoping we resolve in the AI ethics debate is that DEI became really identity-led as opposed to issue-led. And in my work it can be kind of blurry to see that. But, like, I don’t think that focusing on particular identities ultimately is helpful, because it tends to create backlash. So the idea is, like, at Culture Amp right now, where I’m still an advisor, we’re working on improving managers’ ability to build psychologically safe teams. And that particular insight, that that was something that needed to be strategically important for us, came from feedback we received from our black and African American employees. But we’re not building a training for managers of black employees; we’re building a training for all managers, because we believe a lack of psychological safety is a general problem, or could be a problem, with the operation of the business. But we recognise that our black employees are the most likely to experience that. 
And so we consider the issue they’re experiencing and solve for the issue. So I think that’s something we also need to talk about with ethics is we use particular identity experiences as red flags, as test cases, but we actually design solutions that are more broadly applicable. Mhm.

How do you talk about the issues versus the identity when it comes to AI ethics

Trina Sunday: How do you talk about the issues versus the identity when it comes to AI ethics?

Aubrey Blanche: Yeah, so I’ll talk a little bit about an area that I’m really passionate about, which is the environmental impact of, uh, AI. So we know that data centres, for example, these kind of mega data centres, are being put in communities where there are often fewer resources, or that are marginalised, primarily black and brown low-income communities. Now, the strategy for that is that there is less pushback from the community, so it’s strategically easier and cheaper on average for the companies that are putting these in. But I will just tell you, and this is shitty but it’s true: if I’m like, hey, black people are being harmed by data centres, there’s a lot of people who just don’t care. But if I’m talking about overall environmental issues and the increase in power bills that’s happening because data centres are being used, suddenly there’s a lot more people who can see themselves in it, even though that issue will disproportionately hurt low-income, more likely to be, you know, racialised folks. Speaking in the language of the issue, even though I know that there is an identity component to who experiences it the worst, is more likely to have more people see their experience reflected in that problem. And whether we want to criticise that, that you should do the right thing just because you should, not because it impacts you, I will be honest, I don’t have time to worry about your motivations. As long as we’re all on the train together, that’s good enough for me.

Trina Sunday: And I think it goes to the point that there are lots of people still fighting around talking about DEI, or fighting for the language of it, as opposed to just centring on what’s the right thing that we need to be tackling here, you know. But I think the identity versus the issue is really interesting, because I just did a podcast episode on identity, challenging HR people on their identity, you know, and how they’re positioning themselves. Because there’s some really interesting conversations about that. But I think identity can really take us away from what’s really going on. Right? It’s the label as opposed to what’s in the jar. And it’s a bit messy in the middle, and I think that’s the challenge I’m seeing. Like, there’s so much ethical leadership, for example, that’s messy in the middle, you know. Like, where I think about our people leaders, and every blessed day they feel like there’s a trade-off, or the stakes feel personal, and there’s attack from execs who want things to appear a certain way, this is a bit cynical, and there’s a groundswell from below. It’s messy in the middle. How do you talk to those people around ethical leadership?

Aubrey Blanche: So I’m really influenced by something from my interview back in 2019 with Didier, the CEO at Culture Amp. I asked him about his philosophy on leadership and he told me something that I think about way too often. He said, you know, leadership isn’t doing the right thing when there’s a right option. It’s about looking at two terrible options, selecting one, and living with yourself when you have blood on your hands. And I think if we were more honest with people that that’s what leadership entails, it would feel less overwhelming. Not because it’s easy; I don’t think that makes it easy. But sometimes part of the difficulty is the gap between our expectations and what’s actually happening. So if you tell someone that being a leader is all about managing trade-offs, in terms of operations, finance and business effectiveness, but also ethics and values, if you explain up front that you will be accountable for making hard decisions, first of all, I think a lot more people would opt out of leadership. I think there’s a good number of people who do not want that responsibility, they just want the title. And no hate to that. It’s just that if we were honest with people about what leadership costs you and what you have to live with, I think it’s different. So step one is actually just being really honest with people about what the experience is like. I also think, and again I’m biased because this is the work I do, that we should give people more structure and frameworks to make ethical decisions and ethical trade-offs. We’re not really taught that in a structured way. Certainly in Australia, not in our schools, and we’re not taught it at work. So I think there’s more that organisations can do, whether that’s universities or high schools or businesses, to teach people how to think ethically within their own context.
So I would put that on institutions with budgets: they should be investing in specific training around this. I can give you the whole spiel about why it’s good from an operational perspective, but I think that’s where we have to start.

Trina Sunday: What does that look like? Give me the spiel.

Aubrey Blanche: Yeah, so I think the spiel, whatever level you’re looking at, you know, I’m really passionate about working with young leaders and giving them a half day or a full day to actually work through, with an expert, what it looks like to make a good ethical decision. So giving you some frameworks to think through, but then also scenarios to apply those frameworks to that are relevant to your context. I’m working later today with an organisation that’s going through an AI transformation, and we’re doing a series of scenarios in a workshop that will look at the different consent, control and privacy issues that come with deploying AI in the workplace. How do you balance efficiency with legality? So we’re working on issues that people are actually dealing with in their day jobs, but we’re giving them a safe container to talk about the trade-offs and figure out that messy middle before there are consequences to those decisions. So that’s the other thing: we as organisational leaders have, I think, an obligation to give people the ability to practise ethics in a way that’s low stakes before we ask them to be accountable for really high-stakes decisions.

Trina Sunday: That needs to be driven through primary schools. How would things change if we were having those conversations around consent and privacy? And I’m thinking about our young people coming up through the social media Hunger Games, you know, there’s so much in that. I think at every level we’re kind of failing in how we’re showing up ethically. Not the whole human race; there’s such phenomenal humanity around. If you open your eyes and you look for it, there are phenomenal accounts of people being good humans, doing amazing things that inspire me every day. But I would be irresponsible if in my work I wasn’t tuning into the things that can help shift the needle. And I don’t just think about my business, but, you know, organisations I can influence, that influence communities, that can change the world. Right. And I have people who do laugh at that, when I’m like, no, no, I’m here to change the world. But if we’re not all showing up like that, and I know that not everyone’s built the same as me, but if we’re not showing up that way, I feel like we’re not putting in enough effort to drive change.

Aubrey Blanche: I mean, I think that’s true, but for as much as I call myself a cynic, I’m actually an optimist. Like, I genuinely believe most people are good people. And so, to your point, and not that this is a sales pitch, but the Ethics Centre does that work. We work with organisations, and we work with secondary and primary school students through our partner, Primary Ethics. One of the things we’d love to see in the world is more of that training from early ages, age-appropriate to the developmental level of different students, or of professionals once you get into the workplace. That’s part of our vision, because I think the good intentions and the goodness of people are kind of the raw material. But if we don’t give them the infrastructure to actually actualise that goodness, then we lose out on a lot of things in the world that are absolutely possible but are held back by the inability to think collectively in an ethical way. And that impacts our organisations, but it also impacts things like democratic processes and broader societies. So it’s an issue that I certainly work on at the level of organisations, but it has implications at the largest of macro levels.

Trina Sunday: It’s one of those things where, when this is your whole job, like, obviously you’re working at the Ethics Centre, you’re doing your Masters at Cambridge, you are all in. Right. In terms of where we are in ethical leadership and ethical AI, I think part of the challenge is this being felt like it’s just another thing, you know, that stacks on, that’s being layered on. Whereas the foundation of it being the way we do things is different. It’s not extra. There are good humans, but people don’t know what they don’t know, right? So it’s how do we give that guidance around what makes things more fair and more human. But it’s interesting, because I work with a lot of HR teams and functions and I’m not hearing anyone talk about ethical leadership. No, that’s not fair, not everybody, but it’s a very small number that’s really building a strong element of that into their leadership capability interventions. It’s just not coming up. And that’s part of why I reached out, because I’m like, how is this not coming up? It’s massive, and it’s a new frontier.

Aubrey Blanche: I think that’s true. And, you know, we’re living through what we call the polycrisis: rising fascism, climate change, labour force disruption, all of these things. And as things get harder, I’m having more and more conversations with leaders about ethical leadership. So, not to say that I am in any way pleased about things getting difficult, but similar to the example of Workday being sued and everyone going, oh, this is something I need to think about, when things become more difficult, that is often when people realise the urgency and importance of these kinds of things, because they suddenly feel ill-equipped to manage something where, at a lower level of complexity, they felt they didn’t need a framework or guidance. So I am hopeful, though I wish things were not as difficult, that there is going to be a groundswell of people understanding that this work is necessary. I’m also heartened by some of the trends I’m seeing. If you look at data about AI adoption in workplaces, even over the last year there’s been a significant drop in people saying that they’re using AI every day. And research provides some suggestions about why. What we know is that people with higher levels of AI literacy and fluency tend to be more sceptical of the tools, and sometimes even less likely to use them, because they understand their drawbacks. Right. Like, LLMs just make shit up. So people are starting to get a little more critical of the use, although there are some interesting global patterns there: Western countries tend to be more cynical and sceptical about AI compared to countries in the global South.

I’m excited to see people showing hesitation in using AI tools

So that’s an interesting global trend coming out here. For me, I am excited to see people stepping back and showing hesitation in using these tools, because that actually gives us an opening to say, hey, I see that you’re hesitant. Often people know enough to be cautious, but not enough to proceed safely. And so that’s the opening and the opportunity that I think we have, we being whoever wants to be part of the cabal that cares about ethics and AI, or leadership in general. That’s the void we can step into, to say, I totally hear it, because people are now able to articulate a problem that we have an offering to solve, if we want to think in capitalistic terms. And so that makes me optimistic. And just in the last few months, I’m having more and more conversations with people who are ready to develop those frameworks, to develop those critical lenses that help them proceed more safely.

Trina Sunday: I’m reflecting on educational inequities here, listening to some elements of that. I do a lot of work in Cambodia and I have a lot of connections into Africa. And if we look at it with the more discerning lens that those of us with more education, more privilege, can put on it, it’s interesting where AI is going to sit for developing countries, where there’s an education deficit and a disadvantage. And I haven’t had a chance to think on this and reflect on it, so I’ll probably instantly regret tomorrow what my thoughts are, but it feels like a doubling down of disadvantage. That was the reaction I just had that I’m trying to lean into, because I’m not quite sure what that means for me, but I just felt a weight from it. So again, people may not have that lens put on it to then know that this tool might actually do them harm.

I actually am optimistic about the innovative potential of AI in developing nations

Aubrey Blanche: Yeah. So I think there are a couple of different threads to pull out in that. First of all, one of the things I was having a chat about in the Internet comments is this idea that LLMs are biased. I think it’s really important to understand that AI, not just generative AI, not only replicates biases, it entrenches and exacerbates them, because of the nature of the way these tools work. And that not only entrenches bias in the present, it often also degrades the number of potential futures that we have access to. I know that sounds a little doom and gloom, but I think it’s really important for us to understand that the deterministic way these tools work actually cuts off potential futures for us, ways things could be different. So on one hand I find that a bit doom and gloom, but on the other hand, from a developing-nation perspective, I’m really curious to watch, for example, what’s happening in Africa. I think we’re going to see an incredible amount of AI-based innovation coming out of the continent in a way that we don’t see elsewhere. And part of that is because they’re not bogged down in the type of legacy infrastructure that exists in the global North. So I think we’re actually going to see great leaps forward and much more innovative, interesting ideas coming out of those communities. Similar to standpoint theory, the idea that someone who is marginalised in some way possesses unique knowledge about the world that someone who’s less marginalised does not possess, I am really optimistic about the innovative potential and the quality of contributions that we’re going to see from communities around the world that are more marginalised. I think we’re going to see more interesting, creative uses of AI, different ways to develop it. So I would say it’s a yes, and.
Yes, there are these bias issues that can entrench discrimination, but there’s also probably something unique and really valuable that comes out of communities that have been historically marginalised. And I think it’s incumbent on those of us in the global North to actually be looking for those solutions and looking to create platforms for them, knowing that our voices can be really useful in that way.

Trina Sunday: Okay, I’ll take a sip from your cup of optimism. I will do that and I will embrace it. Let’s zoom out, I guess, because I’m very conscious that I could talk to you all day, Aubrey, about so many different things.

Aubrey says Millennials and Gen Z are much more values focused than older generations

But looking through the lens of all the work you’ve done, I’m curious what the next generation of leadership looks like to you.

Aubrey Blanche: So I think we’re already seeing it. Lots of research shows that Millennials and Gen Z are much more values-focused than older generations. So the raw material is already there for people to think more holistically, not just about what the shareholders are making as a result of your work, but about what all of the stakeholders in your work are experiencing. But the question, and I’m smack dab in the middle of the Millennials, though I’m feeling a little elder-millennial in this, is what are we older, more established folks doing to set younger people up for success, so they have the potential to change the world in the way I think they’re really capable of? Not to say that absolves us of our responsibility to be change makers, you know, but it is: what are we doing to lay the foundation so that the next generation can go even further than we were able to? I’m really, really optimistic about Gen Z and the way they are taking no shit about the ways workplaces have run that aren’t really acceptable. Shout out to Amanda Litman, who is the president of Run for Something in the US, and who has written a book called When We’re in Charge. It is like the management book for Millennials about new ways to run workplaces. So if you want a little bit of hope about the way that work and organisations can function, I would say read her book.

Trina Sunday: It’s really interesting, isn’t it? I mean, I’m in Gen X, which we say is the forgotten generation, but it’s the biggest bucket of all time. It’s the biggest age range; I’ve got friends in there heading towards 60, and, you know, I remember 40. Why are there so many of us in one bucket? We’re not that homogeneous. But I think, with the intergenerational workplaces we have, HR talks a lot about how challenging multigenerational workplaces are. I don’t know that they are, if you just focus on the human. It brings it back to, I think, again, we overemphasise the labels and putting people in boxes, when we could just talk about how we have human-first experiences. It’s not just that we’re system stewards, right? We are the conscience of organisations from an HR perspective. If the heartbeat is the humans that work for us, well, let’s just focus on the human aspect and the rest of the labels will drop away.

Aubrey Blanche: I mean, I think that’s true, and the complexity is that in order to see all humans, we have to recognise human difference and the way that those labels or experiences colour what it is to be a human. But we can’t hold too tightly to those labels, because there are so many things that are common among people, even if it’s just by degrees. So, for example, I’ve never experienced anti-Black racism; obviously you can see what I look like, that’s never happened. But I know what it feels like to be excluded. And if I say those are comparable experiences, in that they are both bad even if mine is a degree less severe, that’s a commonality I can start from that creates connection. So, to your point, if we as individuals are more equipped to understand our own identities and where they place us in the spectrum of life and privilege and disadvantage, then we can say, oh, I hear you. And in my case, as a white-passing person, your experience might be more extreme than mine, but I have a personal experience that gives me just a sliver of understanding of it, and I can use that to motivate me to try to address the negativity of your experience. That’s what I see happening. And it’s not actually that complicated, but we need to get away from this us-versus-them, zero-sum language and thinking, and recognise that yes, trade-offs are real and we do have to make them, but we don’t always have to make them when it comes to this work, if we’re designing it well.

Trina Sunday: I love it. Aubrey, thank you so much for the hard work that you’re doing. Really appreciate you joining me.

Aubrey Blanche: Thank you so much for having the conversation. It’s always such a joy to see you.

Trina Sunday: What a powerful conversation. Aubrey reminds us that ethics isn’t about the rules, it’s about reflection. It’s about asking the questions that keep us honest. AI will keep evolving, and so will the challenges that come with it. Right? But no matter how complex technology becomes, our responsibility is the same: to make sure progress doesn’t come at the expense of our people. Because the smartest systems in the world still rely on something profoundly human, and that is our ability to care. So before we optimise, let’s pause. Before we automate, let’s ask: who benefits, who’s excluded, and what would fairness look like here? If there’s one thing I take from today, it’s this: every system has a heartbeat. Make sure yours is still human. Thanks for tuning in and leaning in to this week’s episode as we look to reimagine how we show up for our people, organisations and community. Reach out to us via our website at reimaginehr.com with your HR horror stories, or suggestions of people you’d love to hear from, or topics you want to explore. It’s all about people, purpose and impact, and we are here for all of it.

 Until next time, take care, team.
