I had the privilege of hosting a roundtable last week on how AI and automation are impacting license compliance teams globally. While I’m far from an expert in this space, I’ve been working to rapidly expand my understanding, and the discussion surfaced several themes that stood out:
- A key theme, unsurprisingly given the sensitive data these teams handle, was security and governance. Organizations are actively working to ensure AI adoption is done responsibly, with appropriate safeguards and oversight. At the same time, there was acknowledgment that usage is often outpacing governance, creating a need for more structured approaches.
- Many organizations are still working through how to move from theory to practice. To address this, we invited Ana Newman, Head of Revenue Enablement and Assurance at Arm, to share real-world use cases: how her team is responsibly integrating AI into workflows and how they are measuring tangible benefits.
- A consistent message across participants was: “Start with what you already have.” Many organizations are underutilizing AI capabilities embedded in existing tools and platforms. Before building new solutions, there is significant opportunity in unlocking value from current investments.
- We also discussed how AI is enabling a shift from reactive to proactive compliance, leveraging trend analysis, anomaly detection, and external data sources to identify issues earlier and reduce audit risk.
- A recurring theme was the challenge of “not knowing what we don’t know,” and how AI tools can support combining internal data with external intelligence to help uncover gaps and new opportunities.
With an eye toward practical adoption, we also highlighted a recent event at Connor: an internal AI Hackathon, inspired by approaches used by leading organizations, including Arm. The initiative was highly successful in generating ideas and accelerating implementation, and many of those ideas are now part of our AI roadmap to drive efficiency and intelligence.
There was strong interest in this approach, so I wanted to share a few key parameters for others considering something similar:
- Open the competition to all levels of the organization: This drove meaningful engagement across our team, from senior leaders to new hires, and surfaced ideas from diverse perspectives.
- Set guardrails around responsible AI usage: Establishing clear boundaries ensured that innovation remained aligned with security and governance expectations.
- Provide focus areas or problem statements, but don’t be overly prescriptive: This strikes the right balance between guidance and creativity.
- Ensure finalists and winners are recognized: We included incentives for winning teams and, importantly, highlighted their work in company communications and all-hands meetings, reinforcing a culture of innovation.
One additional takeaway: AI and automation are not just changing processes; they are beginning to reshape how teams are structured and how talent is evaluated. As organizations scale through technology, there is increasing emphasis on adaptability, problem-solving, and technical fluency, rather than traditional role specialization alone.
If you’d like to connect on our Hackathon experience or discuss broader takeaways from the roundtable, I’d welcome the conversation. Feel free to email me directly.