Dialogue recording & synthesis: Automating Ambiguity – Navigating AI Governance

November 12, 2024

Demystifying AI’s Reality

The dialogue facilitated by Gerard (“Gerry”) Salole brought together three distinct yet complementary perspectives on artificial intelligence and its governance: Abeba Birhane providing critical technical and empirical analysis, Abigail Gilbert examining workplace and social implications, and Jodi Starkman offering insights from organisational implementation and human resources. Together, they painted a picture of AI that stands in stark contrast to popular narratives, revealing both its limitations and its profound societal implications.

The Technical Reality: Pattern Recognition, Not Intelligence

Abeba Birhane opened with a crucial demystification of AI, describing it not as true intelligence but as “algorithmic systems that uncover patterns from massive amounts of data through optimisation processes.”

She emphasised the limitations of these systems, noting that “generative systems have been dubbed stochastic parrots, bullshit generators, great pretenders and glorified copy-paste machines.”

This technical reality has profound implications, as these systems:

  • Rely on historical data, making them inherently backward-looking
  • Excel at pattern matching but struggle with genuine logical reasoning
  • Perform poorly at tasks requiring real understanding or judgment
  • Often produce mediocre results in real-world applications

A striking example came from a recent study by Australia’s corporate regulator, in which AI-generated summaries scored 47% against 81% for human-written summaries – highlighting the gap between AI hype and reality.
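To make the “pattern recognition, not intelligence” point concrete, the toy sketch below – our illustration, not anything presented in the dialogue – builds a next-word predictor purely from co-occurrence counts over a tiny “historical” corpus. It can only echo patterns it has already seen, which is why such systems are backward-looking and why inputs outside the data defeat them.

    from collections import Counter, defaultdict

    # A tiny "historical" training corpus: the only knowledge the model has.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count bigram transitions: pure pattern extraction, no understanding.
    transitions = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent historical continuation, or None if unseen."""
        followers = transitions.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))   # 'cat' - a frequent pattern in the data
    print(predict_next("sat"))   # 'on'  - pattern matching works on seen text
    print(predict_next("lion"))  # None  - anything outside the data is opaque

Large generative models do this at vastly greater scale, with learned weights rather than raw counts, but the core move – optimising to reproduce statistical regularities in past data – is the one Birhane describes.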

The Hidden Costs of AI Infrastructure

The dialogue revealed the often-overlooked physical and environmental impacts of AI development. As Birhane pointed out, “The total amount of energy required to run GPUs and data centres is more than the total amount of energy required to sustain Irish households.” This stark reality is reflected in:

  • Exponentially rising energy consumption, with AI expected to consume 500% more energy over the next decade in the UK alone
  • Massive water requirements for cooling data centres, with year-over-year increases of 17-22% for major tech companies
  • Underreported resource usage, with actual consumption potentially 662% higher than claimed
  • Global inequities in the AI supply chain, including underpaid data workers in the Global South

Military Applications and Ethical Concerns

Joseph (“Joe”) Elborn shared recent firsthand conversations with defence contractors. His observation captured the ethical stakes: “If something happens like, I don’t know, they’re targeting a car, and then suddenly it turns out there’s kids in it, the automation won’t stop the kill decision.” His account revealed how AI is already operating in combat zones, with systems that switch from human control to automation when they encounter jamming.

Philosophical and Anthropological Critique

Arturo Escobar, Professor Emeritus of Anthropology and Political Ecology at UNC Chapel Hill, enriched the discussion by proposing an “ontological critique” of AI. As he explained, “AI extends and deepens through its invasion of most aspects of everyday life… the Western capitalist, patriarchal ontology that is anthropocentric.”

This critique identified how AI reinforces problematic aspects of Western capitalist ontology through its anthropocentric foundations, the predominance of white male developers, and its implicit mind-body separation.

Embedded Biases and Societal Implications

A central theme was how AI systems encode and amplify existing societal biases. As Birhane noted, “A lot of the data tends to represent identities, concepts, geographies and so on, in a very negative or in a cliched, stereotypical way.” This manifests in:

  • Visual biases in image generation, reflecting and reinforcing racial stereotypes
  • Linguistic discrimination, with AI systems rating identical content differently based on dialect (see the sketch after this list)
  • Workplace bias through algorithmic management and assessment tools
  • The risk of automating and legitimising existing inequalities
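The dialect finding (see the Nature paper in the resources below) can be illustrated with a simple paired-text audit: score the same content written in two dialects with the same model and compare. The sketch below is our simplified illustration of that audit pattern, assuming the Hugging Face transformers package and its default English sentiment model; it is not the matched-guise method the study itself used.

    # Paired-text audit: identical meaning, two dialects, one model.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # default English sentiment model

    pairs = [
        ("I be so happy when I wake up from a bad dream",   # African American English
         "I am so happy when I wake up from a bad dream"),  # Standardised English
    ]

    for aae_text, sae_text in pairs:
        aae = classifier(aae_text)[0]
        sae = classifier(sae_text)[0]
        # A systematic gap across many semantically identical pairs points to
        # dialect-conditioned bias in the model, not in the content.
        print(f"AAE: {aae['label']} ({aae['score']:.2f}) | "
              f"SAE: {sae['label']} ({sae['score']:.2f})")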

The Reality of Workplace Implementation

Jodi Starkman emphasised that despite the media narrative about rapid AI transformation, organisations are still in the early stages of adoption. As she noted, “There is a lot of noise about the speed of change. Which, generally, is true. But when it comes to generative AI, even leading tech companies are in the early stages of adoption and are doing a LOT of experimenting.”

This early stage presents both opportunities and challenges:

  • The need for increased digital and AI literacy across organisations
  • The importance of intentional job redesign to leverage AI augmentation
  • The necessity of addressing worker anxiety and wellbeing
  • The opportunity to ensure shared prosperity from AI-generated value

The Duality of AI Impact

Jodi Starkman characterised the potential workplace impacts of AI as “A Tale of Two Cities”:

  • AI can augment workers, amplifying their skills, or displace them if treated primarily as a cost-cutting tool
  • It can perpetuate existing biases through problematic data sets, or help identify and remove biases through intelligent scanning and analysis (a minimal sketch of such a scan follows below)

She stressed that it is important to remember that “this is not either/or. It is both/and. And we have choices; decisions that policymakers and business leaders make around AI and its implementation will play a significant role in shaping its future impact on workers.”
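Starkman’s second point – that AI can also help surface bias – often begins with something as unglamorous as scanning outcomes for group-level disparities. The minimal sketch below uses invented data and column names purely for illustration: it computes selection rates by group and the “four-fifths rule” disparate-impact ratio that auditors commonly use as a red flag.

    import pandas as pd

    # Hypothetical screening outcomes; data and column names are invented.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Selection rate per group: the first thing a bias scan looks at.
    rates = decisions.groupby("group")["selected"].mean()
    print(rates)  # A: 0.75, B: 0.25

    # Disparate-impact ratio ("four-fifths rule"): values well below 0.8
    # flag the tool or process for closer human review.
    ratio = rates.min() / rates.max()
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 -> review needed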

Learning from Past Technology Implementations

A crucial insight from Jodi Starkman’s organisational experience is that many of today’s AI implementation challenges mirror previous technological transitions. As a result, many organisations have, in fact, already developed valuable knowledge about building trust and involving workers as stakeholders in technology design, selection, and implementation.

However, as she points out, “We seem to be very resistant to applying what we learn. Or at least to making it stick.” Organisations have historically struggled to maintain lessons learned about worker involvement, trust-building, and inclusive implementation processes. This pattern of forgetting or failing to apply past lessons represents a significant risk in the current AI transformation.

The Transformation of Work

Abigail Gilbert challenged simplistic narratives about AI and employment, noting that “those who have some kind of say or status within any regime will protect their own interests when they feel under a certain type of threat.”

She identified several key “automation archetypes” that suggest AI is fundamentally restructuring work relationships and power dynamics rather than simply eliminating jobs.

The Future of Work and Economic Justice

The dialogue engaged deeply with questions of automation and economic justice. Abby Gilbert, drawing on her research with Max Casey, emphasised that “you won’t get equality as a result of meritocracy.” Her response to questions about Universal Basic Income highlighted:

  • The need for political will and mobilisation rather than technological determinism
  • A preference for universal basic services over UBI
  • The importance of job quality regulation and reduced working time
  • The need for firm-level and national-level architectures to negotiate productivity gains

Democracy and Power Dynamics

The dialogue revealed deep connections between AI deployment and democratic processes. As Abigail Gilbert observed, “Algorithms centralise control and power. This is happening at a societal level, but it’s also happening within organisations at the individual level.”

This centralisation manifests in:

  • Correlation between automation risk and political polarisation
  • The centralisation of control through algorithmic management
  • Impact on worker voice and agency
  • Military applications operating outside regulatory frameworks

Pathways Forward: Governance and Accountability

The conversation identified several crucial areas for action. Abby explained how “the sandbox allows us to get under the bonnet of some of the data sharing agreements… and look at what’s going on to some extent in the value chain,” suggesting practical approaches to governance including:

  • Corporate Accountability
  • Worker Protection
  • Global Governance

The Role of Human Choice and Agency

A central theme to emerge was the critical role of human agency in shaping AI’s impact. As noted earlier, Starkman stressed that even leading tech companies remain in the early stages of generative AI adoption and are still experimenting heavily – which means the window for deliberate choices remains open.

The pandemic experience demonstrated our capacity for rapid, positive change when necessary. As Jodi Starkman observed, “If there were any silver linings from the tragedy of the pandemic, perhaps one was the digital pivot that so many companies adopted in just a matter of days and weeks after struggling to do so for years.” However, the tendency to revert to old patterns, particularly visible in current “return to office” mandates, “seems to be more about outdated mindsets than informed decision making.”

This moment of AI implementation presents both “a huge opportunity for us to get better at that. And a huge risk that we won’t.” Success requires:

  • Investing in digital literacy and AI literacy in particular
  • Redesigning jobs to leverage AI augmentation effectively
  • Addressing worker anxiety and wellbeing proactively
  • Ensuring shared prosperity from AI-generated value

A Critical Moment for Action

The dialogue concluded with Abeba Birhane’s stark observation that positive outcomes require “untangling AI from the current capitalist business model.” This pointed to a fundamental challenge: how to harness AI’s potential while addressing its structural implications for human dignity, environmental sustainability, and democratic governance.

Joe Elborn challenged the speakers to identify potential positive applications of AI for democracy and civic engagement, pushing the conversation beyond critique toward constructive possibilities. Jodi Starkman responded with examples of AI tools being developed to facilitate group deliberation and find common ground, suggesting potential pathways for technology to enhance rather than undermine democratic processes.

This interplay between critique and possibility, grounded in both theoretical understanding and practical experience, characterised the richness of the dialogue and pointed toward the complex work ahead in shaping AI’s role in society.

Additional Resources & References

Links from the chat

  • Partnership on AI – Guidelines for AI and Shared Prosperity
  • Dr. Joy Buolamwini (MIT) – author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines; see the documentary film Coded Bias. She is also the founder of the Algorithmic Justice League, whose mission is a cultural movement towards equitable and accountable AI
  • Innovation Resource Centre for Human Resources (IRC4HR)
  • Two Charter research-project “Playbooks” on AI in the workplace, downloadable from the IRC4HR website:
      – AI in the workplace: How companies and workers are getting it right
      – Using AI in ways that enhance worker dignity and inclusion
  • Catalyzing Safe and Equitable Use of Artificial Intelligence in Home Health Care Work – an in-progress IRC4HR research project on the use of AI with home health care workers, reflecting on accountability, governance, use cases, concerns, and multi-stakeholder perspectives (workers, agencies, union, medical community, patients, families). Report due mid-2025
  • Gilbert & Casey paper on meritocracy and equality (referenced by Abby Gilbert)

  • “AI generates covertly racist decisions about people based on their dialect”, Nature
  • Australian corporate regulator study on AI summarisation (referenced by Abeba Birhane) – compared AI and human summarisation capabilities; AI summaries scored 47% vs 81% for human summaries
  • UN High-Level AI Advisory Body report (mentioned by Abeba Birhane) – launched September 2024; represents an early attempt at a global governance framework. Available through UN channels
  • Report on the “Lavender” targeting system (referenced by Abeba Birhane) – details automated target generation, includes interviews with IDF personnel, and documents decision-making timeframes. Democracy Now
  • Institute for the Future of Work research (referenced by Abby Gilbert) – Good Work Algorithmic Impact Assessment framework
  • AI Accountability Lab (mentioned by Abeba Birhane) – launching end of November 2024; focus on evaluating AI systems and the broader AI ecology, including research on corporate capture of AI regulation
  • Organisations using technology with a mission of “bringing underheard voices to the center of stronger civic spaces” (referenced by Jodi Starkman): Cortico – CEO and co-founder Deb Roy is Director of the MIT Center for Constructive Communication
  • Charter Works – a next-generation media and insights company focused on bridging research to practice, giving people the tactical playbook for what work can and should be

You can watch the full recording of the dialogue here.

______________________________________________________________________